Caruso

Author and researcher - Cyber Warfare
21 karma · Joined · Working (15+ years)
www.insidecyberwarfare.com

Bio

I'm Jeff Caruso, an author and researcher focusing on Cyber Warfare and AI. The third edition of my book "Inside Cyber Warfare" (O'Reilly, 2009, 2011, 2024) will be out this fall. I was a Russia Subject Matter Expert contracted with the CIA's Open Source Center, have provided numerous cyber briefings to U.S. government agencies, and have been a frequent lecturer at the U.S. Air Force Institute of Technology and the U.S. Army War College.

Comments (14)

I didn't introduce CrowdStrike as a vulnerability.

The NSA doesn't provide support to U.S. corporations. That's outside of its mandate.

When a lab gets compromised, there will be an investigation, and the fault will almost certainly be placed with the lab unless the lab can prove negligence on the part of the cybersecurity company or companies it contracted with.

Yes, liability clauses in contracts are sometimes negotiable if the customer is large enough. Often they are not, as we've seen in the fallout from the recent CrowdStrike blunder that caused worldwide chaos, where CrowdStrike has been invoking the EULA provision that limits its liability to twice the customer's annual bill.

Thank you for your response, @Dan H . I understand that you do not agree with a lot of EA doctrine (for lack of a better word), but that you are a Longtermist, albeit not a "strong axiological longtermist." Would that be a fair statement? 

Also, although it took some time, I've met a lot of scientists working on AI safety who have nothing to do with EA or Longtermism or AI doom scenarios. It's just that they don't publish open letters, create political action funds, or have any funding mechanism similar to Open Philanthropy or similarly minded billionaire donors like Jaan Tallinn and Vitalik Buterin. As a result, there's the illusion that AI safety is dominated by EA-trained philosophers and engineers.

In today's Bulletin of the Atomic Scientists is this headline: "Trump has a strategic plan for the country: Gearing up for nuclear war"

https://thebulletin.org/2024/07/trump-has-a-strategic-plan-for-the-country-gearing-up-for-nuclear-war/

Does EA have a plan to address this? If not, now would be a good time.  

Thank you.

Separately, I just read your executive summary regarding the nuclear threat, something I think is particularly serious and worthy of effort. It read to me like the report suggests that there is such a thing as a limited nuclear exchange. If that's correct, I would offer that you're doing more harm than good by promoting that view, which unfortunately some politicians and military officers share.

If you have not yet read, or listened to, Nuclear War: A Scenario by Annie Jacobsen, I highly encourage you to do so. Your budget for finding ways to prevent that from happening would, in my opinion, be well spent creating condensed versions of what Jacobsen accomplished and making them go viral. You'll understand what I mean once you've consumed her book. It completely changed how I think about the subject.

Once the genie is out of the bottle, it doesn't matter, does it? Much of China's current tech achievements began with industrial espionage. You can't constrain a game-changing technology while excluding espionage as a factor. 

It's exactly the same issue with AI. 

While you have an interesting theoretical concept, I don't see any way to derive from it a strategy that would lead to AI safety.

A theory of victory approach won't work for AI. Theories of victory are born of a study of what hasn't worked in warfare. You've got nothing to draw from in order to create an actual theory of victory. Instead, you appear to be proposing a few different strategies, none of which seem very well thought out.

You argue that the U.S. could have established a monopoly on nuclear weapons development. How? The U.S. lost its monopoly to the Soviet Union through espionage at Los Alamos. How do you imagine that could have been prevented?

AI is software, and in software security, offense always has the advantage over defense. There is no network that cannot be breached with sufficient time and resources, because software is inherently insecure.

I haven't seen the phrase "Advanced Artificial Intelligence" in use before. How does AAI differ from Frontier AI, AGI, and Artificial Superintelligence? 

Fired from OpenAI's Superalignment team, Aschenbrenner now runs an investment fund backing AGI-focused startups, according to The Information.

"Former OpenAI super-alignment researcher Leopold Aschenbrenner, who was fired from the company for allegedly leaking information, has started an investment firm to back startups with capital from former Github CEO Nat Friedman, investor Daniel Gross, Stripe CEO Patrick Collision and Stripe president John Collision, according to his personal website.

In a recent podcast interview, Aschenbrenner spoke about the new firm as a cross between a hedge fund and a think tank, focused largely on AGI, or artificial general intelligence. “There’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x. Probably you can make even way more than that,” he said. “Capital matters.”

“We’re going to be betting on AGI and superintelligence before the decade is out, taking that seriously, making the bets you would make if you took that seriously. If that’s wrong, the firm is not going to do that well,” he said."

What happened to his concerns over safety, I wonder? 
