Friday Squid Blogging: Giant Squid vs. Blue Whale
Sep. 19th, 2025 09:06 pm
A comparison aimed at kids.
The Atlantic Council has published its second annual report: “Mythical Beasts: Diving into the depths of the global spyware market.”
Too much good detail to summarize, but here are two items:
First, the authors found that the number of US-based investors in spyware has notably increased in the past year, compared with the sample of the spyware market captured in the first Mythical Beasts project. In the first edition, the United States was the second-largest investor in the spyware market, behind Israel, with twelve investors observed to be domiciled within the United States. In this second edition, twenty new US-based investors were observed investing in the spyware industry in 2024, catapulting the United States to being the largest investor in this sample of the spyware market. This is significant in scale: US-based investment from 2023 to 2024 largely outpaced that of the other major investing countries observed in the first dataset, including Italy, Israel, and the United Kingdom. It is also significant in the disparity it points to: the visible enforcement gap between the flow of US dollars and US policy initiatives. Despite numerous US policy actions, such as the addition of spyware vendors to the Entity List, and the broader global leadership role that the United States has played through imposing sanctions and diplomatic engagement, US investments continue to fund the very entities that US policymakers are making an effort to combat.
Second, the authors elaborated on the central role that resellers and brokers play in the spyware market, while being a notably under-researched set of actors. These entities act as intermediaries, obscuring the connections between vendors, suppliers, and buyers. Oftentimes, intermediaries connect vendors to new regional markets. Their presence in the dataset is almost assuredly underrepresented given the opaque nature of brokers and resellers, making corporate structures and jurisdictional arbitrage more complex and challenging to disentangle. While their uptick in the second edition of the Mythical Beasts project may be the result of a wider, more extensive data-collection effort, there is less reporting on resellers and brokers, and these entities are not systematically understood. As observed in the first report, the activities of these suppliers and brokers represent a critical information gap for advocates of a more effective policy rooted in national security and human rights. These discoveries help bring into sharper focus the state of the spyware market and the wider cyber-proliferation space, and reaffirm the need to research and surface these actors that otherwise undermine the transparency and accountability efforts by state and non-state actors as they relate to the spyware market.
Really good work. Read the whole thing.
This is a nice piece of research: “Mind the Gap: Time-of-Check to Time-of-Use Vulnerabilities in LLM-Enabled Agents”:
Abstract: Large Language Model (LLM)-enabled agents are rapidly emerging across a wide range of applications, but their deployment introduces vulnerabilities with security implications. While prior work has examined prompt-based attacks (e.g., prompt injection) and data-oriented threats (e.g., data exfiltration), time-of-check to time-of-use (TOCTOU) vulnerabilities remain largely unexplored in this context. TOCTOU arises when an agent validates external state (e.g., a file or API response) that is later modified before use, enabling practical attacks such as malicious configuration swaps or payload injection. In this work, we present the first study of TOCTOU vulnerabilities in LLM-enabled agents. We introduce TOCTOU-Bench, a benchmark with 66 realistic user tasks designed to evaluate this class of vulnerabilities. As countermeasures, we adapt detection and mitigation techniques from systems security to this setting and propose prompt rewriting, state integrity monitoring, and tool-fusing. Our study highlights challenges unique to agentic workflows, where we achieve up to 25% detection accuracy using automated detection methods, a 3% decrease in vulnerable plan generation, and a 95% reduction in the attack window. When combining all three approaches, we reduce the TOCTOU vulnerabilities in an executed trajectory from 12% to 8%. Our findings open a new research direction at the intersection of AI safety and systems security.
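For readers unfamiliar with the underlying systems-security concept: a TOCTOU bug means the "check" and the "use" resolve a resource separately, leaving a window in which it can be swapped. Here is a minimal Python sketch of the classic file-handling version (my illustration of the general pattern, not the paper's agent setting or its benchmark), alongside the standard mitigation of validating the same file descriptor you then read from:

```python
import os
import stat
import tempfile

MAX_SIZE = 1_000_000  # arbitrary size limit for the example

def vulnerable_read(path):
    st = os.stat(path)              # time of check: resolves `path` once
    if st.st_size > MAX_SIZE:
        raise ValueError("too big")
    # ...window: an attacker can replace `path` (e.g., with a symlink) here...
    with open(path, "rb") as f:     # time of use: resolves `path` AGAIN
        return f.read()

def safer_read(path):
    # Open once, then validate the file descriptor itself: check and use
    # now refer to the same underlying file, closing the race window.
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
    try:
        st = os.fstat(fd)           # stat the already-open descriptor
        if not stat.S_ISREG(st.st_mode) or st.st_size > MAX_SIZE:
            raise ValueError("rejected")
        return os.read(fd, st.st_size)
    finally:
        os.close(fd)
```

The paper's point is that agent workflows reintroduce this shape at a higher level: the "resource" an LLM agent checks (a file, an API response) can change before the step that uses it, and the window can be much longer than in a local filesystem race.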
Vulnerabilities in electronic safes that use Securam Prologic locks:
While both their techniques represent glaring security vulnerabilities, Omo says it’s the one that exploits a feature intended as a legitimate unlock method for locksmiths that’s the more widespread and dangerous. “This attack is something where, if you had a safe with this kind of lock, I could literally pull up the code right now with no specialized hardware, nothing,” Omo says. “All of a sudden, based on our testing, it seems like people can get into almost any Securam Prologic lock in the world.”
[…]
Omo and Rowley say they informed Securam about both their safe-opening techniques in spring of last year, but have until now kept their existence secret because of legal threats from the company. “We will refer this matter to our counsel for trade libel if you choose the route of public announcement or disclosure,” a Securam representative wrote to the two researchers ahead of last year’s Defcon, where they first planned to present their research.
Only after obtaining pro bono legal representation from the Electronic Frontier Foundation’s Coders’ Rights Project did the pair decide to follow through with their plan to speak about Securam’s vulnerabilities at Defcon. Omo and Rowley say they’re even now being careful not to disclose enough technical detail to help others replicate their techniques, while still trying to offer a warning to safe owners about two different vulnerabilities that exist in many of their devices.
The company says that it plans on updating its locks by the end of the year, but has no plans to patch any locks already sold.
Senator Ron Wyden has asked the Federal Trade Commission to investigate Microsoft over its continued use of the RC4 encryption algorithm. The letter discusses a hacker technique called Kerberoasting, which exploits the Kerberos authentication system.
Attaullah Baig, WhatsApp’s former head of security, has filed a whistleblower lawsuit alleging that Facebook deliberately failed to fix a bunch of security flaws, in violation of its 2019 settlement agreement with the Federal Trade Commission.
The lawsuit, alleging violations of the whistleblower protection provision of the Sarbanes-Oxley Act passed in 2002, said that in 2022, roughly 100,000 WhatsApp users had their accounts hacked every day. By last year, the complaint alleged, as many as 400,000 WhatsApp users were getting locked out of their accounts each day as a result of such account takeovers.
Baig also allegedly notified superiors that data scraping on the platform was a problem because WhatsApp failed to implement protections that are standard on other messaging platforms, such as Signal and Apple Messages. As a result, the former WhatsApp head estimated that pictures and names of some 400 million user profiles were improperly copied every day, often for use in account impersonation scams.
This is a current list of where and when I am scheduled to speak:
The list is maintained on this page.
Nondestructive detection of multiple dried squid qualities by hyperspectral imaging combined with 1D-KAN-CNN
Abstract: Given that dried squid is a highly regarded marine product in Oriental countries, the global food industry requires a swift and noninvasive quality assessment of this product. The current study therefore uses visible–near-infrared (VIS-NIR) hyperspectral imaging and deep learning (DL) methodologies. We acquired and preprocessed VIS-NIR (400–1000 nm) hyperspectral reflectance images of 93 dried squid samples. Important wavelengths were selected using competitive adaptive reweighted sampling, principal component analysis, and the successive projections algorithm. Based on a Kolmogorov-Arnold network (KAN), we introduce a one-dimensional KAN convolutional neural network (1D-KAN-CNN) for nondestructive measurements of fat, protein, and total volatile basic nitrogen….
Interesting analysis:
When cyber incidents occur, victims should be notified in a timely manner so they have the opportunity to assess and remediate any harm. However, providing notifications has proven a challenge across industry.
When making notifications, companies often do not know the true identity of victims and may only have a single email address through which to provide the notification. Victims often do not trust these notifications, as cyber criminals often use the pretext of an account compromise as a phishing lure.
[…]
This report explores the challenges associated with developing the native-notification concept and lays out a roadmap for overcoming them. It also examines other opportunities for narrower changes that could increase the likelihood that victims will both receive and trust notifications and be able to access support resources.
The report concludes with three main recommendations for cloud service providers (CSPs) and other stakeholders:
- Improve existing notification processes and develop best practices for industry.
- Support the development of “middleware” necessary to share notifications with victims privately, securely, and across multiple platforms including through native notifications.
- Improve support for victims following notification.
While further work remains to be done to develop and evaluate the CSRB’s proposed native notification capability, much progress can be made by implementing better notification and support practices by cloud service providers and other stakeholders in the near term.
A couple of months ago, a new paper demonstrated some new attacks against the Fiat-Shamir transformation. Quanta published a good article that explains the results.
This is a pretty exciting paper from a theoretical perspective, but I don’t see it leading to any practical real-world cryptanalysis. The fact that there are some weird circumstances that result in Fiat-Shamir insecurities isn’t new—many dozens of papers have been published about it since 1986. What this new result does is extend this known problem to slightly less weird (but still highly contrived) situations. But it’s a completely different matter to extend these sorts of attacks to “natural” situations.
What this result does, though, is make it impossible to provide general proofs of security for Fiat-Shamir. It is the most interesting result in this research area, and it demonstrates that we are still far from fully understanding the exact security guarantee provided by the Fiat-Shamir transform.
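For readers who haven't seen the transform itself: Fiat-Shamir turns an interactive identification protocol into a non-interactive signature by replacing the verifier's random challenge with a hash of the prover's commitment and the message. Here is a toy sketch using a Schnorr-style protocol, with a deliberately tiny group and ad hoc hash encoding chosen purely for illustration; real deployments use large groups and carefully specified hashing:

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the order-q subgroup.
# These parameters are far too small for real use; illustration only.
q = 1019
p = 2 * q + 1          # 2039, also prime
g = 4                  # 2^2, a generator of the order-q subgroup

x = secrets.randbelow(q - 1) + 1   # secret key
y = pow(g, x, p)                   # public key

def challenge(t, msg):
    # Fiat-Shamir step: a hash of the commitment (and message) plays the
    # role of the verifier's random challenge, removing the interaction.
    h = hashlib.sha256(str(t).encode() + b"|" + msg).digest()
    return int.from_bytes(h, "big") % q

def sign(msg):
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)               # commitment (first prover message)
    c = challenge(t, msg)          # hash replaces the verifier's challenge
    s = (r + c * x) % q            # response
    return t, s

def verify(msg, t, s):
    # Accept iff g^s == t * y^c (mod p), i.e., g^(r + cx) == g^r * g^(xc).
    c = challenge(t, msg)
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

The new attacks don't threaten ordinary constructions like this one; they work by building contrived protocols whose statements interact maliciously with the specific hash function used for the challenge.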
Just a few months after Elon Musk’s retreat from his unofficial role leading the Department of Government Efficiency (DOGE), we have a clearer picture of his vision of government powered by artificial intelligence, and it has a lot more to do with consolidating power than benefitting the public. Even so, we must not lose sight of the fact that a different administration could wield the same technology to advance a more positive future for AI in government.
To most on the American left, the DOGE end game is a dystopic vision of a government run by machines that benefits an elite few at the expense of the people. It includes AI rewriting government rules on a massive scale, salary-free bots replacing human functions, and a nonpartisan civil service forced to adopt an alarmingly racist and antisemitic Grok AI chatbot built by Musk in his own image. And yet despite Musk’s proclamations about driving efficiency, few cost savings have materialized and few successful examples of automation have been realized.
From the beginning of the second Trump administration, DOGE was a replacement of the US Digital Service. That organization, founded during the Obama administration to empower agencies across the executive branch with technical support, was replaced by one reportedly charged with traumatizing their staff and slashing their resources. The problem in this particular dystopia is not the machines and their superhuman capabilities (or lack thereof) but rather the aims of the people behind them.
One of the biggest impacts of the Trump administration and DOGE’s efforts has been to politically polarize the discourse around AI. Despite the administration railing against “woke AI” and the supposed liberal bias of Big Tech, some surveys suggest the American left is now measurably more resistant to developing the technology and pessimistic about its likely impacts on their future than their right-leaning counterparts. This follows a familiar pattern of US politics, of course, and yet it points to a potential political realignment with massive consequences.
People are morally and strategically justified in pushing the Democratic Party to reduce its dependency on funding from billionaires and corporations, particularly in the tech sector. But this movement should decouple the technologies championed by Big Tech from those corporate interests. Optimism about the potential beneficial uses of AI need not imply support for the Big Tech companies that currently dominate AI development. To view the technology as inseparable from the corporations is to risk unilateral disarmament as AI shifts power balances throughout democracy. AI can be a legitimate tool for building the power of workers, operating government and advancing the public interest, and it can be that even while it is exploited as a mechanism for oligarchs to enrich themselves and advance their interests.
A constructive version of DOGE could have redirected the Digital Service to coordinate and advance the thousands of AI use cases already being explored across the US government. Following the example of countries like Canada, each instance could have been required to make a detailed public disclosure as to how they would follow a unified set of principles for responsible use that preserves civil rights while advancing government efficiency.
Applied to different ends, AI could have produced celebrated success stories rather than national embarrassments.
A different administration might have made AI translation services widely available in government services to eliminate language barriers for US citizens, residents and visitors, instead of revoking some of the modest translation requirements previously in place. AI could have been used to accelerate eligibility decisions for Social Security disability benefits by performing preliminary document reviews, significantly reducing the infamous backlog of 30,000 Americans who die annually awaiting review. Instead, the deaths of people awaiting benefits may now double due to cuts by DOGE. The technology could have helped speed up the ministerial work of federal immigration judges, helping them whittle down a backlog of millions of waiting cases. Instead, the judicial system must face that backlog amid firings of immigration judges.
To reach these constructive outcomes, much needs to change. Electing leaders committed to leveraging AI more responsibly in government would help, but the solution has much more to do with principles and values than it does technology. As historian Melvin Kranzberg said, technology is never neutral: its effects depend on the contexts it is used in and the aims it is applied towards. In other words, the positive or negative valence of technology depends on the choices of the people who wield it.
The Trump administration’s plan to use AI to advance their regulatory rollback is a case in point. DOGE has introduced an “AI Deregulation Decision Tool” that it intends to use through automated decision-making to eliminate about half of a catalog of nearly 200,000 federal rules. This follows similar proposals to use AI for large-scale revisions of the administrative code in Ohio, Virginia and the US Congress.
This kind of legal revision could be pursued in a nonpartisan and nonideological way, at least in theory. It could be tasked with removing outdated rules from centuries past, streamlining redundant provisions and modernizing and aligning legal language. Such a nonpartisan, nonideological statutory revision has been performed in Ireland—by people, not AI—and other jurisdictions. AI is well suited to that kind of linguistic analysis at a massive scale and at a furious pace.
But we should never rest on assurances that AI will be deployed in this kind of objective fashion. The proponents of the Ohio, Virginia, congressional and DOGE efforts are explicitly ideological in their aims. They see “AI as a force for deregulation,” as one US senator who is a proponent put it, unleashing corporations from rules that they say constrain economic growth. In this setting, AI has no hope to be an objective analyst independently performing a functional role; it is an agent of human proponents with a partisan agenda.
The moral of this story is that we can achieve positive outcomes for workers and the public interest as AI transforms governance, but it requires two things: electing leaders who legitimately represent and act on behalf of the public interest and increasing transparency in how the government deploys technology.
Agencies need to implement technologies under ethical frameworks, enforced by independent inspectors and backed by law. Public scrutiny helps bind present and future governments to their application in the public interest and to ward against corruption.
These are not new ideas and are the very guardrails that Trump, Musk and DOGE have steamrolled over the past six months. Transparency and privacy requirements were avoided or ignored, independent agency inspectors general were fired and the budget dictates of Congress were disrupted. For months, it has not even been clear who is in charge of and accountable for DOGE’s actions. Under these conditions, the public should be similarly distrustful of any executive’s use of AI.
We think everyone should be skeptical of today’s AI ecosystem and the influential elites that are steering it towards their own interests. But we should also recognize that technology is separable from the humans who develop it, wield it and profit from it, and that positive uses of AI are both possible and achievable.
This essay was written with Nathan E. Sanders, and originally appeared in Tech Policy Press.