23 Comments
Daniel Meegan:

CapitalResearchCenter.com: search "Media vs America"

Daniel Meegan:

It will be used as they intended; never mind the law in a lawless society. influencewatch.org: search "Media vs America"

Daniel Meegan:

Biden was a prop used to install nefarious creatures

James Arthur:

Yep, changing the terms of a contract after it is formed and partly performed is called a breach. If Anthropic can't or won't cure it, I say round up the executives and board members, ship them out to a front, strap them to the front of some tanks, and see how fast they get religion!

Richard Luthmann:

This isn't a grad school seminar. It's war. If you sell battlefield tools to the United States and then try to veto their lawful use mid-conflict, you're not a conscience; you're a liability. The War Powers structure puts authority where it belongs: elected leadership accountable to voters. If a contractor deliberately restricts or sabotages systems in a way that endangers American forces, prosecutors don't need philosophy; they need indictments. You don't get to take defense money and then play Commander in Chief from Silicon Valley. If you can't stomach the mission, don't bid the contract. But don't endanger American lives and call it virtue.

Tony Brasunas:

Wow, this is a new low.

Look, we don't want killer robots or mass AI surveillance. We don't want this. Do you?

This argument is weak, weak, weak. It's biased and tendentious. And it's anti free market.

Say it with me: If you don't want a product, don't buy it.

Anthropic is saying: our product will allow you to do all kinds of things, but it won't allow you to do Super Evil Sh*t.

And then the govt says "but, but, but what if we want to do Super Evil Sh*t?"

And Anthropic says, no, you can't do that with our software.

If you don't want this product, don't buy it.

The gov't has two choices. Agree NOT to do Super Evil Sh*t, or go find a different product.

They can go see if someone else is willing to enable them to do Super Evil Sh*t. Personally, I hope that no AI company enables our govt to do Super Evil Sh*t, but we'll have to see.

And here comes AMUSE arguing and screaming: you have to let our govt do Super Evil Sh*t.

Why?

At the very least, Anthropic should be applauded for having the spinal integrity to do this rather than be venal or evil.

But AMUSE just can't bring himself to rational thought for some reason, probably because Trump wants to do Super Evil Sh*t. I've watched this Substack go from brilliant and uncompromised prior to Trump's election to full disingenuous, biased Trump cheerleader. I imagine AMUSE is aiming for a position in the administration. That makes the most sense. Or, help me out, has this Substack ever been even mildly critical of Trump on anything?

c Anderson:

Under your definition of "super evil," the US could not have dropped atomic bombs on Japan, and a prolonged war would have cost many more American and Japanese lives. Dario Amodei wants the US to accept an unknown AI-generated concept of appropriate use of force.

Tony Brasunas:

I really hope humanity can think more deeply than this. Building and selling bombs and tanks is fundamentally different from unleashing an Orwellian dystopia of killer robots and global surveillance.

(P.S.: Read your history. That war was already won. We dropped those A-bombs on Hiroshima and Nagasaki to scare the world and initiate global hegemony.)

cat:
Feb 26 (edited)

It depends on how the contract, its Statement of Work, and other documents were written, but Amuse's take is likely spot on. A company can choose to bid or not bid on a Request for Proposal, and at that time, if it wants, base its bid price on a change or limitation being acceptable to the government. Its bid and any terms it wants are compared with those of other bidders. Negotiation takes place, which is another time a company may try to get something changed or clarified. After contract award, that same company cannot dictate terms after the fact.

And it's common sense that contracting with the Department of WAR isn't the same as contracting with Wal-Mart.

Tony Brasunas:

If you're appealing to common sense, I'd say that it's common sense that creating a potential Orwellian dystopian hell and making tanks are not the same thing.

Suzie:

While I generally agree with this analysis, one can also remember how the Biden administration justified using technology in any number of ways AGAINST its US opposition.

Imagining the AI-related technology designed to fight US foreign enemies being turned against the American people by future bad-acting party administrations is clearly not out of the realm of possibility. Yes, it would be an unlawful act, but did that even slightly discourage the Biden administration from doing it anyway? It sure did not. And as far as I can see, to this day, no one has been held accountable for those blatantly unconstitutional acts.

The law is only as powerful as those willing to uphold it. The Democrats believe it to be "optional."

c Anderson:

This is a ridiculous argument about weapons that will be used to protect Americans and US interests. This contractor, Anthropic, wants its AI to make value judgments in the field that military analysts would normally make. That would be like a physician using AI in a surgical procedure and letting the AI make life-and-death decisions. 🙄

Jerri Hinojosa:

Another analogy would be that when a painter is commissioned to do a portrait or a writer is commissioned to ghost write a book, the work product (and any potential uses) becomes the property of the person who commissioned it.

The contractor is not the only party with agency in this situation. Although Anthropic may be ahead in the AI game at the moment, if the Defense Dept blackballed them because of their demands, two things would happen. First, shareholders or investors would exert pressure on the Board to adjust to industry norms for defense contractors. There are not that many customers for AI-guided weaponry, and I suspect none of those customers would be OK with giving Anthropic veto power. Second, competitors would soon catch up if they had surplus R&D money from lucrative defense contracts and Anthropic did not. Is this problem not solved by normal market forces?

Suzie:

Is it, or is it not, already established law that AI or any weaponry cannot be used against Americans? Say the Republican Party is deemed by some future (God forbid) Democrat administration to be illegal (wouldn't put it past them), deeming them a "domestic enemy." Then all bets are off.

c Anderson:

What are you talking about? We have this thing called the judicial process that holds people accountable for their actions. Guns don't shoot people; people shoot people.

Suzie:

Yeah. And that judicial process is working out so well for us. 🙄

c Anderson:

Sue, there is no better system than the US legal system.

Suzie:

Yes when applied faithfully.

The Left in this country has zero interest in what the law says when it comes to achieving its own ends.

We have multiple states now engaged in not-so-borderline insurrection against the federal government and rogue judges letting criminals walk free. The legal system is only as good as those who uphold it, and the Left acts as if it's optional.

c Anderson:

Are you suggesting conservatives turn into vigilantes like the brain-dead, whistle-blowing Democrat mob activists in Minnesota? There is a reason the judicial process is slow. When it is fast, mistakes can be made.

John Wygertz:

What we saw at the State of the Union and in the responses is that about half the country wouldn't fight for the country for one reason or another. The Anthropic situation is a reflection of that, and it's another sad reminder of the loss of patriotism across our entire nation.