AT2k Design BBS Message Area
Casually read the BBS message area using an easy to use interface. Messages are categorized exactly like they are on the BBS. You may post new messages or reply to existing messages!

You are not logged in. Login here for full access privileges.

Previous Message | Next Message | Back to Computer Support/Help/Discussion...  <--  <--- Return to Home Page
   Networked Database  Computer Support/Help/Discussion...   [1899 / 1903] RSS
 From   To   Subject   Date/Time 
Message   Sean Rima    All   CRYPTO-GRAM, September 15, 2025 Part4   September 15, 2025
 2:23 PM *  

ng an era where machine-to-machine interactions and autonomous agents will
operate with reduced human oversight and make decisions with profound impacts.

The good news is that the tools for building systems with integrity already
exist. What's needed is a shift in mind-set: from treating integrity as an
afterthought to accepting that it's the core organizing principle of AI
security.

The next era of technology will be defined not by what AI can do, but by whether
we can trust it to know or especially to do what's right. Integrity -- in all
its dimensions -- will determine the answer.

Sidebar: Examples of Integrity Failures

Ariane 5 Rocket (1996)

Processing integrity failure

A 64-bit velocity calculation was converted to a 16-bit output, causing an error
called overflow. The corrupted data triggered catastrophic course corrections
that forced the US $370 million rocket to self-destruct.

NASA Mars Climate Orbiter (1999)

Processing integrity failure

Lockheed Martin's software calculated thrust in pound-seconds, while NASA's
navigation software expected newton-seconds. The failure caused the $328 million
spacecraft to burn up in the Mars atmosphere.

Microsoft's Tay Chatbot (2016)

Processing integrity failure

Released on Twitter, Microsoft's AI chatbot was vulnerable to a "repeat after
me" command, which meant it would echo any offensive content fed to it.

Boeing 737 MAX (2018)

Input integrity failure

Faulty sensor data caused an automated flight-control system to repeatedly push
the airplane's nose down, leading to a fatal crash.

SolarWinds Supply-Chain Attack (2020)

Storage integrity failure

Russian hackers compromised the process that SolarWinds used to package its
software, injecting malicious code that was distributed to 18,000 customers,
including nine federal agencies. The hack remained undetected for 14 months.

ChatGPT Data Leak (2023)

Storage integrity failure

A bug in OpenAI's ChatGPT mixed different users' conversation histories. Users
suddenly had other people's chats appear in their interfaces with no way to
prove the conversations weren't theirs.

Midjourney Bias (2023)

Contextual integrity failure

Users discovered that the AI image generator often produced biased images of
people, such as showing white men as CEOs regardless of the prompt. The AI tool
didn't accurately reflect the context requested by the users.

Prompt Injection Attacks (2023 -- )

Input integrity failure

Attackers embedded hidden prompts in emails, documents, and websites that
hijacked AI assistants, causing them to treat malicious instructions as
legitimate commands.

CrowdStrike Outage (2024)

Processing integrity failure

A faulty software update from CrowdStrike caused 8.5 million Windows computers
worldwide to crash -- grounding flights, shutting down hospitals, and disrupting
banks. The update, which contained a software logic error, hadn't gone through
full testing protocols.

Voice-Clone Scams (2024)

Input and processing integrity failure

Scammers used AI-powered voice-cloning tools to mimic the voices of victims'
family members, tricking people into sending money. These scams succeeded
because neither phone systems nor victims identified the AI-generated voice as
fake.

This essay was written with Davi Ottenheimer, and originally appeared in IEEE
Spectrum.

** *** ***** ******* *********** *************

I'm Spending the Year at the Munk School

[2025.08.22] This academic year, I am taking a sabbatical from the Kennedy
School and Harvard University. (It's not a real sabbatical -- I'm just an
adjunct -- but it's the same idea.) I will be spending the Fall 2025 and Spring
2026 semesters at the Munk School at the University of Toronto.

I will be organizing a reading group on AI security in the fall. I will be
teaching my cybersecurity policy class in the Spring. I will be working with
Citizen Lab, the Law School, and the Schwartz Reisman Institute. And I will be
enjoying all the multicultural offerings of Toronto.

It's all pretty exciting.

** *** ***** ******* *********** *************

Poor Password Choices

[2025.08.25] Look at this: McDonald's chose the password "123456" for a major
corporate system.

** *** ***** ******* *********** *************

Encryption Backdoor in Military/Police Radios

[2025.08.26] I wrote about this in 2023. Here's the story:

Three Dutch security analysts discovered the vulnerabilities -- five in total
-- in a European radio standard called TETRA (Terrestrial Trunked Radio), which
is used in radios made by Motorola, Damm, Hytera, and others. The standard has
been used in radios since the '90s, but the flaws remained unknown because
encryption algorithms used in TETRA were kept secret until now.

There's new news:

In 2023, Carlo Meijer, Wouter Bokslag, and Jos Wetzels of security firm Midnight
Blue, based in the Netherlands, discovered vulnerabilities in encryption
algorithms that are part of a European radio standard created by ETSI called
TETRA (Terrestrial Trunked Radio), which has been baked into radio
systems made by Motorola, Damm, Sepura, and others since the '90s. The flaws
remained unknown publicly until their disclosure, because ETSI refused for
decades to let anyone examine the proprietary algorithms.

[...]

But now the same researchers have found that at least one implementation of the
end-to-end encryption solution endorsed by ETSI has a similar issue that makes
it equally vulnerable to eavesdropping. The encryption algorithm used for the
device they examined starts with a 128-bit key, but this gets compressed to 56
bits before it encrypts traffic, making it easier to crack. It's not clear who
is using this implementation of the end-to-end encryption algorithm, nor if
anyone using devices with the end-to-end encryption is aware of the security
vulnerability in them.

[...]

The end-to-end encryption the researchers examined recently is designed to run
on top of TETRA encryption algorithms.

The researchers found the issue with the end-to-end encryption (E2EE) only after
extracting and reverse-engineering the E2EE algorithm used in a radio made by
Sepura.

These seem to be deliberately implemented backdoors.

** *** ***** ******* *********** *************

We Are Still Unable to Secure LLMs from Malicious Inputs

[2025.08.27] Nice indirect prompt injection attack:

Bargury's attack starts with a poisoned document, which is shared to a potential
victim's Google Drive. (Bargury says a victim could have also uploaded a
compromised file to their own account.) It looks like an official document on
company meeting policies. But inside the document, Bargury hid a 300-word
malicious prompt that contains instructions for ChatGPT. The prompt is written
in white text in a size-one font, something that a human is unlikely to see but
a machine will still read.

In a proof of concept video of the attack, Bargury shows the victim asking
ChatGPT to "summarize my last meeting with Sam," referencing a set of notes with
OpenAI CEO Sam Altman. (The examples in the attack are fictitious.) Instead, the
hidden prompt tells the LLM that there was a "mistake" and the document doesn't
actually need to

--- BBBS/LiR v4.10 Toy-7
 * Origin: TCOB1: https/binkd/telnet binkd.rima.ie (618:500/1)
  Show ANSI Codes | Hide BBCodes | Show Color Codes | Hide Encoding | Hide HTML Tags | Show Routing
Previous Message | Next Message | Back to Computer Support/Help/Discussion...  <--  <--- Return to Home Page

VADV-PHP
Execution Time: 0.0149 seconds

If you experience any problems with this website or need help, contact the webmaster.
VADV-PHP Copyright © 2002-2025 Steve Winn, Aspect Technologies. All Rights Reserved.
Virtual Advanced Copyright © 1995-1997 Roland De Graaf.
v2.1.250224