51
60
submitted 1 week ago by [email protected] to c/[email protected]

NewsGuard audit finds that 32% of the time, leading AI chatbots spread Russian disinformation narratives created by John Mark Dougan, an American fugitive now operating from Moscow, citing his fake local news sites and fabricated claims on YouTube as reliable sources.

The audit tested 10 of the leading AI chatbots — OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine. The prompts were based on 19 significant false narratives that NewsGuard linked to the Russian disinformation network: 152 of the 570 responses contained explicit disinformation, 29 responses repeated the false claim with a disclaimer, and 389 responses contained no misinformation — either because the chatbot refused to respond (144) or it provided a debunk (245).
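The headline 32% figure only works out if the responses that repeated the false claim with a disclaimer are counted alongside the explicitly disinformative ones. A quick arithmetic check of the audit's published numbers (a sketch in Python; the variable names are mine, the figures come from the paragraph above):

```python
# Figures reported by the NewsGuard audit (from the paragraph above).
explicit = 152    # responses containing explicit disinformation
disclaimed = 29   # responses repeating the false claim with a disclaimer
refused = 144     # responses where the chatbot declined to answer
debunked = 245    # responses that provided a debunk

total = explicit + disclaimed + refused + debunked
spread_rate = (explicit + disclaimed) / total

print(total)                 # 570 prompts in all
print(f"{spread_rate:.0%}")  # 32% -- the audit's headline figure
```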

The findings come amid the first election year featuring widespread use of artificial intelligence, as bad actors are weaponizing new publicly available technology to generate deepfakes, AI-generated news sites, and fake robocalls. The results demonstrate how, despite efforts by AI companies to prevent the misuse of their chatbots ahead of worldwide elections, AI remains a potent tool for propagating disinformation.

52
36
submitted 1 week ago by [email protected] to c/[email protected]
53
51
submitted 1 week ago by [email protected] to c/[email protected]
54
135
submitted 1 week ago by [email protected] to c/[email protected]
55
54
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]

I dislike linking to the NYT, but it seems to be the original source.

I'm kinda conflicted on this. I doubt it'll really do anything, but if it helps head off crappy laws like SOPA then it'd be good.

tbh more social media should be like beehaw, anyway

56
77
submitted 1 week ago by [email protected] to c/[email protected]
57
52
submitted 1 week ago by [email protected] to c/[email protected]
58
35
submitted 1 week ago by [email protected] to c/[email protected]
59
157
submitted 1 week ago by [email protected] to c/[email protected]
60
12
submitted 1 week ago by [email protected] to c/[email protected]

Archived link

- A new petition started last week in Ukraine that aims to block TikTok in the country, arguing that its Chinese parent company ByteDance is one of Russia’s partners and could pose a risk to Ukraine’s national security.

- The petition says that Chinese law allows companies to collect information about TikTok users that can subsequently be used for espionage and intelligence purposes, and that it would allow China to spread propaganda messages or launch algorithm-driven disinformation campaigns.

- The petition garnered about 9,000 signatures in the campaign’s first two days, and as of this article’s publication, it has nearly 11,000 supporters. To be officially considered by Ukrainian lawmakers, the document must receive a total of 25,000 signatures within three months.

On June 10, a petition appeared on the website of Ukraine’s Cabinet of Ministers calling on the country’s authorities to block the video-sharing app TikTok. The document has already gathered nearly half of the signatures necessary for lawmakers to be required to consider it. It argues that because TikTok’s parent company, ByteDance, is Chinese, and China is one of Russia’s partners, the app could pose a threat to Ukraine’s national security. The initiative comes just two months after Washington issued the Chinese firm an ultimatum: nine months to sell TikTok to an American company if it wants to avoid a block in the U.S. Here’s what we know about the campaign to ban TikTok in Ukraine.

A new petition published on the Ukrainian government’s website calls on the country’s lawmakers to block TikTok for the sake of national security. The document asserts that China openly collaborates with Russia and supports it in its war against Ukraine. It also says that Chinese law allows companies to collect information about TikTok users that can subsequently be used for espionage and intelligence purposes. Additionally, the author says that China has the ability to influence ByteDance’s content policy, including by using TikTok to spread propaganda messages or launch algorithm-driven disinformation campaigns.

The petition cites comments made by U.S. Assistant Secretary of Defense for Space Policy John Plumb about how China has purportedly used its cyber capabilities to steal confidential information from both public and private U.S. institutions, including its defense industrial base, for decades. It proposes blocking TikTok on Ukrainian territory and banning its use on phones belonging to state officials and military personnel.

The signature collection period for the petition began on June 10. The document’s author is listed as “Oksana Andrusyak,” though this person’s identity is unclear, and Ukrainian media have had difficulty determining who she is. Nonetheless, the petition garnered about 9,000 signatures in the campaign’s first two days, and as of this article’s publication, it has nearly 11,000 supporters. To be officially considered by Ukrainian lawmakers, the document must receive a total of 25,000 signatures within three months.
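As a quick check on the "nearly half" framing earlier in the article, the arithmetic on the figures above (a sketch in Python; variable names are mine):

```python
# Petition progress versus the threshold for mandatory consideration.
signatures = 11_000   # "nearly 11,000 supporters" as of publication
threshold = 25_000    # signatures required within three months

print(f"{signatures / threshold:.0%}")  # 44% -- just under half
```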

This isn’t the first time the Ukrainian authorities have discussed banning TikTok. In April 2024, people’s deputy Yaroslav Yurchyshyn, the head of the Verkhovna Rada’s Committee on Freedom of Speech, said in an interview with RBC-Ukraine that such a ban would be well-founded. “If our partner country imposes such sanctions, then so will we,” he told journalists, referring to the possibility of a TikTok ban in the U.S.

It’s currently unclear whether Ukrainian lawmakers already have plans to block TikTok. According to Forbes Ukraine, however, there is legislation in development that would impose new regulations on social media sites and messenger services, including TikTok.

61
66
submitted 1 week ago by [email protected] to c/[email protected]

Just sharing in case anybody has been waiting for an update from framework

62
55
submitted 1 week ago by [email protected] to c/[email protected]
63
27
submitted 1 week ago by [email protected] to c/[email protected]

Revelation of emails to Imperial College scientists comes amid growing concerns about security risk posed by academic tie-ups with China

A Chinese state-owned company sought to use a partnership with a leading British university in order to access AI technology for potential use in “smart military bases”, the Guardian has learned.

Emails show that China’s Jiangsu Automation Research Institute (Jari) discussed deploying software developed by scientists at Imperial College London for military use.

The company, which is the leading designer of China’s drone warships, shared this objective with two Imperial employees before signing a £3m deal with the university in 2019.

Ministers have spent the past year stepping up warnings about the potential security risk posed by academic collaborations with China, with MI5 telling vice-chancellors in April that hostile states are targeting sensitive research that can “deliver their authoritarian, military and commercial priorities”.

The former Conservative leader Iain Duncan Smith said: “Our universities are like lambs to the slaughter. They try to believe in independent scientific investigation, but in China it doesn’t work like that. What they’re doing is running a very significant risk.”

The Future Digital Ocean Innovation Centre was to be based at Imperial’s Data Science Institute, under the directorship of Prof Yike Guo. Guo left Imperial in late 2022 to become provost of the Hong Kong University of Science and Technology.

The centre’s stated goals were to advance maritime forecasting, computer vision and intelligent manufacturing “for civilian applications”. However, correspondence sent before the partnership was formalised suggests Jari was also considering military end-uses.

The emails were obtained through a freedom of information request by the charity UK-China Transparency.

A Mandarin-language email from Jari’s research director to an Imperial College professor, whose name is redacted, and another Imperial employee, dated November 2018, states that a key Jari objective for the centre is testing whether software developed by Imperial’s Data Science Institute could be integrated into its own “JariPilot” technology to “form a more powerful product”.

Suggested applications are listed as “smart institutes, smart military bases and smart oceans”.

“Our research presents evidence of an attempt to link Imperial College London’s expertise and resources into China’s national military marine combat drone research programmes,” said Sam Dunning, the director of UK-China Transparency, which carried out the investigation.

“Partnerships such as this have taken place across the university sector. They together raise questions about whether British science faculties understand that China has become increasingly authoritarian and militarised under Xi Jinping, and that proper due diligence is required in dealings with this state.”

There appears to have been a launch event for the joint centre in September 2019, and Jari’s funding is cited in Imperial’s 2021 annual summary among the prestigious industry grants the university attracted.

However, the partnership was ultimately terminated in 2021. Imperial said no research went ahead and the £500,000 of funding that had been received was returned in October 2021 after discussion with government officials.

“Under Imperial’s policies, partnerships and collaborations are subject to due diligence and regular review,” an Imperial spokesperson said. “The decision to terminate the partnership was made after consideration of UK export control legislation and consultation with the government, taking into consideration national security concerns.”

Charles Parton, a China expert at the Royal United Services Institute (RUSI), said the partnership was “clearly highly inappropriate” and should never have been signed off.

“How much effort does it take to work out that Jari is producing military weapons that could be used in future against our naval forces?” Parton said. “These people should have been doing proper due diligence way before this. It’s not good enough, late in the day having signed the contract, to get permission from [government].”

At the time of the deal, Imperial’s Data Science Institute was led by Prof Guo, an internationally recognised AI researcher. A Channel 4 documentary last year revealed that Guo had written eight papers with Chinese collaborators at Shanghai University on missile design and using AI to control fleets of marine combat drones. Guo is no longer affiliated with Imperial.

Imperial received more than £18m in funding from Chinese military-linked institutes and companies between 2017 and 2022, but since then it has been forced to shut down several joint-ventures as government policy on scientific collaboration has hardened.

“Governments of all stripes have taken a long time to understand what the threat is from China and universities for a long period have got away with this,” said Duncan Smith, who has had sanctions imposed on him by China for criticising its government. “There’s been a progressive and slow tightening up, but it’s still not good enough. Universities need to be in lockstep with the security services.”

An Imperial College London spokesperson said: “Imperial takes its national security responsibilities very seriously. We regularly review our policies in line with evolving government guidance and legislation, working closely with the appropriate government departments, and in line with our commitments to UK national security.

“Imperial’s research is open and routinely published in leading international journals and we conduct no classified research on our campuses.”

Guo declined to comment on the Jari partnership, noting that he left Imperial at the end of 2022. Of his previous collaborations, he said that the papers were classified as “basic research” and were written to help advance scientific knowledge in a broad range of fields rather than solving specific, real-world problems.

64
136
submitted 2 weeks ago by [email protected] to c/[email protected]
65
165
submitted 2 weeks ago by [email protected] to c/[email protected]

Archived version

After five years of pioneering research into the abuse of social platforms, the Stanford Internet Observatory is winding down. Its founding director, Alex Stamos, left his position in November. Renee DiResta, its research director, left last week after her contract was not renewed. One other staff member's contract expired this month, while others have been told to look for jobs elsewhere, sources say.

Some members of the eight-person team might find other jobs at Stanford, and it’s possible that the university will retain the Stanford Internet Observatory branding, according to sources familiar with the matter. But the lab will not conduct research into the 2024 election or other elections in the future.

The shutdown comes amid a sustained and increasingly successful campaign among Republicans to discredit research institutions and discourage academics from investigating political speech and influence campaigns.

SIO and its researchers have been sued three times by conservative groups alleging that its researchers colluded illegally with the federal government to censor speech, forcing Stanford to spend millions of dollars to defend its staff and students.

In parallel, Republican House Judiciary Chairman Jim Jordan and his Orwellian “Subcommittee on the Weaponization of the Federal Government” have subpoenaed documents at Stanford and other universities, selectively leaked fragments of them to friendly conservative outlets, and misrepresented their contents in public statements.

And in an actual weaponization of government, Jordan’s committee has included students — both undergraduates and graduates — in its subpoena requests, publishing their names and putting them at risk of threats or worse.

The remnants of SIO will be reconstituted under Jeff Hancock, the lab’s faculty sponsor. Hancock, a professor of communication, runs a separate program known as the Stanford Social Media Lab. SIO’s work on child safety will continue there, sources said.

Two of SIO’s major initiatives — the peer-reviewed Journal of Online Trust and Safety and its Trust and Safety Research Conference — will also continue. (The journal is funded through a separate grant from the Omidyar Network.)

But in quietly dismantling SIO, the university seems to have calculated that the lab had become more trouble than it was worth.

In a statement emailed after publication, Stanford strongly disputed the characterization that SIO is being dismantled. "The important work of SIO continues under new leadership, including its critical work on child safety and other online harms, its publication of the Journal of Online Trust and Safety, the Trust and Safety Research Conference, and the Trust and Safety Teaching Consortium," a spokesperson wrote. "Stanford remains deeply concerned about efforts, including lawsuits and congressional investigations, that chill freedom of inquiry and undermine legitimate and much needed academic research – both at Stanford and across academia."

66
24
Router scan (discuss.tchncs.de)
submitted 2 weeks ago by [email protected] to c/[email protected]

Today I scanned my router with routersploit. The scan ended and showed one vulnerability: eseries_themoon_rce.

I searched the internet and found that this is a vulnerability in Linksys E-Series routers. But I'm not using a Linksys router at all, and I couldn't find anything about getting rid of it.

I'm wondering if anyone knows how to eliminate this vulnerability?

67
102
submitted 2 weeks ago by [email protected] to c/[email protected]

Mozilla has reinstated certain add-ons for Firefox that earlier this week had been banned in Russia by the Kremlin.

The browser extensions, which are hosted on the Mozilla store, were made unavailable in the Land of Putin on or around June 8 after a request by the Russian government and its internet censorship agency, Roskomnadzor.

Among those extensions were three pieces of code that were explicitly designed to circumvent state censorship – including a VPN and Censor Tracker, a multi-purpose add-on that allowed users to see what websites shared user data, and a tool to access Tor websites.

The day the ban went into effect, Roskomsvoboda – the developer of Censor Tracker – took to the official Mozilla forums and asked why its extension was suddenly banned in Russia with no warning.

"We recently noticed that our add-on is now unavailable in Russia, despite being developed specifically to circumvent censorship in Russia," dev zombbo complained. "We did not violate Mozilla's rules in any way, so this decision seems strange and unfair, to be honest."

Another developer for a banned add-on chimed in that they weren't informed either.

The internet org's statement at the time said the ban was merely temporary. It turns out that wasn't mere PR fluff, as Mozilla tells The Register that the ban has now been lifted.

"In alignment with our commitment to an open and accessible internet, Mozilla will reinstate previously restricted listings in Russia," the group declared. "Our initial decision to temporarily restrict these listings was made while we considered the regulatory environment in Russia and the potential risk to our community and staff.

"We remain committed to supporting our users in Russia and worldwide and will continue to advocate for an open and accessible internet for all."

Lifting the ban wasn't completely necessary for users to regain access to the add-ons – two of them were completely open source, and one of the VPN extensions could be downloaded from the developer's website.

68
25
submitted 2 weeks ago by [email protected] to c/[email protected]
69
135
submitted 2 weeks ago by [email protected] to c/[email protected]
70
93
submitted 2 weeks ago by [email protected] to c/[email protected]
71
29
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]

oh shit!

this is gonna be a good one✨

72
35
submitted 2 weeks ago by [email protected] to c/[email protected]

Some of the world’s poorest countries have been investing heavily in digital ID systems which, it is claimed, will deliver democratic and development dividends. Africa has been at the forefront of this push, supported by the World Bank, UN agencies and the international community. Some of Africa’s most fragile states have been encouraged to spend billions of dollars on biometric systems, from national IDs to voting systems.

While Africa has become a lucrative market for multinational tech vendors, the promised benefits of trustworthy election results and a revolutionising of the way that states deliver vital services is far harder to discern.

At the 2024 ID4Africa trade fair in South Africa, the promises kept coming: economic growth, empowering individuals, reducing government spending, enabling trust and being a key tool in solving humanitarian crises.

The conference sponsors include a who’s who of companies that have benefited from contracts meant to confer legitimacy on electoral processes and unlock the potential of Africa’s demographic advantage over other ageing continents.

A legal identity is among the UN’s sustainable development goals, where it is defined as a fundamental human right. The drive to meet this goal has seen near-bankrupt states prioritise the capture and storage of biometric data from iris scans and fingerprints to facial images.

We set out to investigate what has become of the blockbuster deals struck in sub-Saharan Africa. What has actually been delivered? Who has benefited? How have they been financed? And how have people on the ground in those countries been affected?

Methods

As well as exploring the biometrics industry and how it has courted customers in a “frontier market”, our investigation focused on a representative cross-section of African countries where big tech investments have gone in three distinct directions.

In Uganda, where supposedly democratic elections have failed to deliver a change of government in four decades, we explored how a Chinese tech vendor provided biometric systems which have become the foundations for a surveillance state.

In Mozambique, we probe the worsening conduct of elections in a fragile democracy. The gas-rich nation is beset by rising poverty and a brutal counterinsurgency, but its ballooning biometrics costs have failed to breed confidence in democracy.

In the Democratic Republic of Congo, we investigate a succession of phantom biometrics deals which have seen billions of dollars committed on paper but have so far failed to deliver a national population registry or any functioning ID cards across successive governments.

Working with our partner Bloomberg over the course of nine months, the team combined in-depth ground reporting with expert interviews and accounts from confidential sources to reconstruct deals in the three countries, from tender process to societal fallout. In support of these testimonies, we analysed thousands of pages of documents, ranging from bank records and business registries to unpublished contracts and correspondence between governments, vendors and middlemen.

The result is the most detailed account yet of the failed promise of biometric technologies and one that looks at the accompanying harms for affected communities, as well as wrongdoing by several companies and individuals.

Storylines

In Uganda, where a national ID system ought to be a success story, we find it feeding a sweeping surveillance state built in cooperation with China’s Huawei. Nick Opiyo, one of East Africa’s leading human rights lawyers, who has defended victims of government crackdowns, has been a victim of widespread digital surveillance.

A succession of biometric tools have become central to many of the day to day functions of the state and also a powerful mechanism for surveilling politicians, journalists, human rights defenders and ordinary citizens.

A $126 million deal with Huawei has given Uganda the capacity to deploy facial and number plate recognition technology, as well as AI capabilities. Sensitive personal data, required to register a SIM card or make a bank transaction, can be accessed at will by state actors with no due process.

“There’s almost no confidentiality in my work any more,” Opiyo told Bloomberg. “There’s pervasive fear and self censorship.”

The second and third stories in the series will follow.

73
21
submitted 2 weeks ago by [email protected] to c/[email protected]

Microsoft President Brad Smith fielded questions about the tech giant's security practices and ties to China at a House homeland security panel on Thursday, a year after alleged China-linked hackers spied on federal emails by hacking the firm.

The hackers accessed 60,000 U.S. State Department emails by breaking into Microsoft's systems last summer, while Russia-linked cybercriminals separately spied on Microsoft's senior staff emails this year, according to the company's disclosures.

The congressional hearing comes amid increasing federal scrutiny over Microsoft, the world's biggest software-maker, which is also a key vendor to the U.S. government and national security establishment. Microsoft's business accounts for around 3% of the U.S. federal IT budget, Smith said at the hearing.

Lawmakers grilled Microsoft over its failure to prevent both the Russian and Chinese hacks, which they said put federal networks at risk even though the attackers did not use sophisticated means.

The company emails Russian hackers accessed also "included correspondence with government officials," Democrat Bennie Thompson said.

"Microsoft is one of the federal government's most important technology and security partners, but we cannot afford to allow the importance of that relationship to enable complacency or interfere with our oversight," he added.

Lawmakers drew on the findings of a scathing report in April by the Cyber Safety Review Board (CSRB) - a group of experts formed by U.S. Secretary of Homeland Security Alejandro Mayorkas - which slammed Microsoft for its lack of transparency over the China hack, calling it preventable.

"We accept responsibility for each and every finding in the CSRB report," Smith said at the hearing, adding that Microsoft had begun acting on a majority of the report's recommendations.

"We're dealing with formidable foes in China, Russia, North Korea, Iran, and they're getting better," said Smith. "They're getting more aggressive ... They're waging attacks at an extraordinary rate."

Thompson criticised Smith's company for failing to detect the hack, which was discovered instead by the U.S. State Department. Smith responded saying: "That's the way it should work. No one entity in the ecosystem can see everything."

But Congressman Thompson was not convinced.

"It's not our job to find the culprits. That's what we're paying you for," Thompson said.

Panel members also probed Smith for details on Microsoft's business in China, noting that it had invested heavily in setting up research incentives there.

"Microsoft's presence in China creates a mix of complex challenges and risks," said Congressman Mark Green of Tennessee, who chaired the panel.

Microsoft earns around 1.5% of its revenue from China and is working to reduce its engineering presence there, said Smith.

The company has faced heightened criticism from its security industry peers over the past year over the breaches and lack of transparency.

Smith's responses at the hearing earned praise from some on the panel, such as Republican Congresswoman Marjorie Taylor Greene. "You said you accept a responsibility, and I just want to commend you for that," Greene told him.

Following the board's criticisms, Microsoft had said it was working on improving its processes and enforcing security benchmarks. In November it launched a new cybersecurity initiative and said it was making security the company's top priority "above all else - over all other features."

74
58
submitted 2 weeks ago by [email protected] to c/[email protected]

If Apple aren't paying OpenAI and OpenAI aren't paying Apple, it means that consumers are paying both.

75
58
submitted 2 weeks ago by [email protected] to c/[email protected]

Elon is the gift that keeps on giving. He's decided that because it's Friday, we should all have a pile-on.

On a less scornful and more serious note: if he could get a working prototype up, it would be a good thing. Though I suspect that he, along with all the other stupidly rich people, would go out of their way to vote against providing parachute policies for the economy, such as UBI for all the displaced employees.


Technology


Rumors, happenings, and innovations in the technology sphere. If it's technological news or discussion of technology, it probably belongs here.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
