Having played around with this a bit, I recommend using https://github.com/CadQuery/cadquery as the CAD language instead. I'm pretty sure it could even transpile to OnShape / Solidworks models as well, though it might require some funky hacks with their extensions frameworks
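For anyone curious, this is roughly what a part looks like in CadQuery's fluent Python API (a minimal sketch; the part and its dimensions are made up purely for illustration):

    # A 20x20x10 plate with a 5 mm through-hole, exported to STEP
    import cadquery as cq

    plate = (
        cq.Workplane("XY")
        .box(20, 20, 10)     # base solid
        .faces(">Z")         # select the top face
        .workplane()         # sketch on that face
        .hole(5)             # cut a hole through the part
    )

    cq.exporters.export(plate, "plate.step")  # STEP files open in OnShape / Solidworks

STEP export gets you the geometry into OnShape/Solidworks today; round-tripping the parametric history is the part that would need the funky extension hacks.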
The Sinophobic culture at Anthropic is worrying. Say what you will about authoritarianism, but China’s non-imperialist foreign policy means their economy is less reliant on a military-industrial complex.
All they have to do is continue to pump out exponentially more solar panels and the petrodollar will fall, possibly taking our reserve currency status with it. The U.S. seems more likely to start a hot war in the name of “democracy” as it fails to gracefully metabolize the end of its geopolitical dominance, and Dario’s rhetoric pushes us further in that direction.
Look. I think the Chinese AI companies are doing a lot of good. I'm glad they exist. I'm glad they're relatively advanced. I don't think the entire nation of China is a bunch of villains. I don't think the US, even before the current era, was a bunch of do-gooders.
But China has some of the most imperialist policies in the world. They are just as imperialist as Russia or America. Military contracts are still massive business.
I also believe the petrodollar will fall, but it isn't going to be because China built exponentially more solar panels.
That's different to the expansionist imperial policies of Spain in the 1500s or Britain in the 1700s. It also affects a very large proportion of the world's population. That Wikipedia page has some good links for further reading about this.
But it's an important point when considering China's place in the world.
We're talking about the modern world, though. China's imperialism over the past half century is not significantly different from any other major world power. The choices we have aren't 1500s Spain or 1700s Britain vs. 2000s China.
And Belt and Road is the Marshall Plan writ large; the Marshall Plan was considered one of the largest imperialist programs ever undertaken by the USA, and B&R covers many, many countries beyond that map. You'll notice all of these loans they've offered have very favorable terms for them - it's arguably many times more exploitative than the Marshall Plan.
> Educate me please with a comparison of what China has done to be "some of the most imperialist policies"?
Tibet occupation. Taiwan encirclement and ongoing military exercises. Strong-arming African and Asian countries that made the mistake of signing up for Belt & Road. Tiananmen Square. Illegal foreign police stations. Uyghur/Xinjiang genocide and concentration camps. Repeated invasion and occupation of Indian territory in the north-east and north-west. The Great Firewall of China - occupation and suppression of its own population. Ongoing Han settlement of Tibet, Xinjiang and other ethnic regions. Violent destruction of Hong Kong democracy (which was a condition of the handover). Spratly Islands occupation. Attacks on Filipino shipping and coast guard. Ongoing incursions around Japan's Senkaku Islands.
Tibet
Hong Kong / Macau
Taiwan
Everything constantly in the South China Sea
Belt and Road is effectively the Marshall Plan but even bigger - Africa being the major example, but also Eastern Europe, parts of the Middle East, etc. Over 100 countries. This exact playbook is what sets up the infrastructure and the reasons for military intervention at a later date - protecting your investments.
For example, China operates 1 foreign military base, in Djibouti. How many do you think the U.S. has in the South China Sea alone?
Beyond that, how many people has China killed in foreign military conflicts in the past 40 years? How many foreign governments have they overthrown?
Instead of all this, they’ve used their resources not only to become the world’s economic superpower but also to lift 800 million people out of poverty, accounting for 75% of the world’s reduction during the past 4 decades. The U.S. has added 10 million during that same time period.
First off, I consider the post-Mao / starting with Deng era of Chinese government to be the most relevant when considering who they “are” as a country now.
However, I’d still maintain that before that, China’s foreign policy was more focused on maintaining territorial sovereignty against the threat of Western imperialism vs. focused on expansion or foreign influence: https://en.wikipedia.org/wiki/History_of_foreign_relations_o...
Meanwhile, the entire territory of the U.S. is predicated on one of history’s largest genocides, and a consistently expansionary foreign policy on top of that.
Tibet, the Philippines, and Taiwan would like to have a word, not to mention Chinese military action in support of its North Korea puppet state, and wars with Vietnam and India.
Are you serious? Don't you know how many wars China has waged? It tried to assimilate Vietnam for 1000 years. The last large-scale war against Vietnam was as recent as 1979. In fact, China has started wars with all of its neighbors, without exception.
The one we live in, where they have control over a wide swathe of land mass through imperialism and have actively resisted relinquishing it?
The one we live in, where they are constantly violating international law in international waters in the South China Sea?
The one we live in, where they are constantly rattling sabers at South Korea and Japan when it comes to military expansion?
The one we live in, where they brutally cracked down on Hong Kong, failing to abide by the 50-year one-country-two-systems deal, not even making it halfway through the agreed period?
The one we live in, where there is constant threat to Taiwan?
It may have been a lazy post you're responding to, but anyone that is paying attention to this topic enough to talk about it is going to either say 'Of course China is imperialist, the same as every other global power' or take some sort of tankie approach to justify it.
I'm well informed on all of these, but no, if we compare to other global powers like the US or Russia, or historically Britain, France, Spain, etc., China is 100% not imperialist or colonialist, not by a large margin. Those issues are largely exaggerated by the media, and anyone with decent exposure to history and international politics wouldn't say they are the same.
Sure, China has some disputes with neighboring countries in the South China Sea; the worst conflict they've had is fishing boats running into each other, with a death toll of zero last time I checked.
Meanwhile the US has killed at least 126 people in alleged drug strikes in the Caribbean Sea since last year, WITHOUT trial.
Anyone believing these are equivalent imperialist activities is a hypocrite at best.
What is China doing in the South China Sea? It's the South China Sea.
Let's just compare to the Monroe Doctrine [1]. What it actually means has gone through several iterations, but since Teddy Roosevelt's time (I think), it's that the United States views the Americas (being North and South America) as the sole domain of the United States.
This was a convenient excuse for any number of regime changes in Central and South America since 1945. The US almost started World War Three over Cuba in 1962 after the USSR responded to the US putting nuclear MRBMs in Turkey. We've starved Cuba for 60+ years for having the audacity to overthrow our puppet government and nationalize some mob casinos. Recently, we kidnapped the head of state of Venezuela because reasons.
But sure, let's focus on China militarizing its territorial waters.
You're arguing that because its English-language name is the South China Sea, China owns it and their actions can't be imperialist?
Brunei, Malaysia, Indonesia, Vietnam, the Philippines, and Taiwan will all be happy to know that we've solved it - we can just abandon it all to China. Problem solved!
This is a silly argument. There are significant territorial disputes that China is extremely aggressive on, international tribunals have ruled them as violating international law in international waters and in sovereign waters of other nations, etc.
And the US just casually carried out a special military operation in another sovereign country and captured their president, without consequences. So much for self-righteousness.
Obviously self-defense, with Nobel Peace Prize-worthy restraint.
Consider that it's PRC-claimed territory. Literally 100% of PRC claims are inherited from the ROC, i.e. the PRC has expanded no claims, and it has actively settled 12 of 14 land borders (the most on earth), essentially all with 50%+ concessions, i.e. the PRC ceded more land in negotiations. That OBJECTIVELY makes the PRC the most benevolent rising power in recorded history; any government losing land in so many border settlements would be accused of treason. Also note the PCA ruling is not international law, so what the PRC does in the SCS is not even legally wrong (they legally can't be wrong, since UNCLOS cannot rule on sovereignty). Or that the PRC was the last to militarize SCS islands (except Brunei, which never did), and the PRC conceded the ROC/TW's original 11-dash line down to 9 dashes, which even within the SCS disputes makes the PRC the only party to have made concessions.
The PRC is objectively the LEAST imperialistic rising power by any sensible definition, i.e. expanding into territories outside its claims - claims the PRC didn't even make, but again inherited from the ROC when UN recognition changed.
“One country two systems” is definitionally not imperialism, and given that “One China” is still an internationally recognized thing, neither is Taiwan. “Imperialism” is not a synonym for “morally repugnant government policy”.
I can see the argument for Hong Kong. I don't agree, really, but I can understand it. Under the strictest of definitions, perhaps it isn't.
But Taiwan is very obviously a totally separate country no matter what fictions anyone employs. If you are trying to talk about the thin veneer of everyone going "Uh huh, sure, China, yep Taiwan is totally part of you, wink wink, nudge nudge" as somehow making China not imperialist when Taiwan basically lives under the perpetual threat of a Chinese military invasion and having their own democratic form of government overthrown and replaced with the CCP, then... I don't really know what to say.
I suppose we could argue about imperialism being more of an economic thing - in which case this all still holds up - China's investments in Africa are effectively the same playbook the US has run in developing nations for years. The US learned it from prior imperialist nations, but Belt and Road is nearly a carbon copy of what the US has done in other places.
But let's look at what the original poster was actually talking about: saying that China is safe because, not being imperialist, it doesn't rely on a military-industrial complex. The proper word to use, if we want to get down to the semantics of it all, would be expansionist - but it's still not true. China has the 2nd-largest military-industrial complex in the world, and the gap between them and the US is shrinking every day. And if you were to look at wartime capacity, where China's dual-use shipyards could be switched to naval production instead of commercial, a huge portion of that gap disappears immediately.
I think the part about China is just about projecting alignment with the USG in hopes that this will result in Anthropic being treated more favourably by the current administration.
Taiwan is a matter of perspective. From the Chinese perspective, there was a civil war and the KMT lost. That's also the official position of the US, the EU and most countries in the world. It's called the One China policy. And China seems happy to maintain the status quo and leave the situation unresolved. Is it really imperialism to say that ultimately there will be reunification?
Even if you accept Tibet as imperialist, which is debatable, it was in 1950. You want to compare that to US imperialism, particularly since WW2 [1]? And I say "debatable" here because Tibet had a system that is charitably called "serfdom" where 90% of people couldn't own land but they did have some rights. However, they were the property of their lords and could be gifted or traded, you know, like property. There's another word for that: slavery.
It is 100% factually accurate to say that the People's Republic of China is not imperialist.
This is the China that is not only threatening to invade Taiwan but doing live-fire exercises around the island, and threatening and attempting to coerce Japan for suggesting it would come to Taiwan's defense.
It wasn't that long ago that Taiwan claimed to be the legitimate government of China; given that China still maintains the reverse claim, it's not outrageous that it would consider an outside country's defense to be interference in an internal matter.
Whether or not that claim is legitimate, it is consistent with the concept of China having a non-imperialist foreign policy, and claims to the contrary need to look elsewhere for supporting evidence.
While that rhetoric makes sense in the context of the history and politics of China and Taiwan, they have been independently governed nations for quite a while and have very different political systems, their own armies, etc. They are de facto separate nations if nothing else.
I also note China's aggressive and violent colonization and expansive claims of the South China Sea.
Taking any nation/land/sea by force is imperialist, by definition.
Taiwan saying otherwise would immediately trigger an attack from the PRC.
It's still imperialism that China is dominating a neighbor to require it to state a certain position, especially when that's very far from the de facto reality on the ground: that Taiwan is clearly separate.
You know who else considers Taiwan to be part of the People's Republic of China? The US, the EU and in fact most countries in the world. It's called the One China policy. There are I believe 12 countries that have diplomatic relations with Taiwan.
The position of the PRC is that Taiwan will ultimately be reunified. That doesn't necessarily mean by military force. It doesn't even necessarily mean soon. The PRC famously takes a very long term view.
And those islands you mention are in the South China Sea.
> The solution to this problem, he says, is artificial intelligence. The book offers a short guide to building a “target machine,” similar in description to Lavender, based on AI and machine-learning algorithms. Included in this guide are several examples of the “hundreds and thousands” of features that can increase an individual’s rating, such as being in a Whatsapp group with a known militant, changing cell phone every few months, and changing addresses frequently.
The short-term effect is a harbinger of the long-term risk, since capitalism doesn’t inherently care for people who don’t provide economic value. Once superintelligent AI arises, none of us will have value within this system. Even the largest current capital holders will have a hard time holding on to it with an enormous intelligence disadvantage. The logical endpoint is the subjugation or elimination of our species, unless we find a new economic system with human value at its core.
There are a lot of assumptions going on here. One of them is that superintelligent AI will arise. We have no reason to believe this will happen in our lifetimes. I posit that we are about as close to superintelligent AI as James Watt was to nuclear fusion.
The other assumption is that wealth and power are distributed according to intelligence. This is obviously false, wealth and power are largely distributed according to who you or your father plays golf with. As long as AIs don't play golf and don't have fathers, we are quite safe.
> There are a lot of assumptions going on here. One of them is that superintelligent AI will arise. We have no reason to believe this will happen in our lifetimes. I posit that we are about as close to superintelligent AI as James Watt was to nuclear fusion.
This is a perfectly reasonable response if nobody is trying to build it.
Given people are trying to build it, what's the expected value from ignoring the problem? E($Damage_i) = P(BadOutcome_i) * $Damage_i.
$Damage can be huge (there are many possible bad outcomes of varying severity and probability, hence the subscript), which means that at the very least we should try to get a good estimate for P(…) so we know which problems are most important. In addition to it being bad to ignore real problems, it is also bad to do a Pascal's Mugging on ourselves just because we accidentally slipped a few decimal points in our initial best-guess, especially as we have finite capacity ourselves to solve problems.
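To make that concrete, here's a toy sketch of the triage this implies (every scenario name and number below is an invented placeholder, not an estimate from anywhere):

    # Rank risks by expected damage, E[Damage_i] = P(BadOutcome_i) * Damage_i.
    # All probabilities and dollar figures are made-up placeholders.
    scenarios = {
        "scenario A": (1e-2, 1e9),   # (P(BadOutcome), Damage in $)
        "scenario B": (1e-4, 1e13),
        "scenario C": (5e-1, 1e11),
    }

    # Sort by expected damage, highest first, to see which problem
    # deserves attention first under this framing.
    ranked = sorted(scenarios.items(),
                    key=lambda kv: kv[1][0] * kv[1][1],
                    reverse=True)

    for name, (p, damage) in ranked:
        print(f"{name}: E[damage] = ${p * damage:,.0f}")

Even a tiny P can dominate the ranking when the damage is large enough, which is exactly why getting a good estimate of P(…) matters so much.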
Finally, let's assume you're right, that we're centuries off at least, and that all the superintelligent narrow AI we've already got examples of involves things that can't be replicated in any area that poses a threat. How long would it take to solve alignment? Is that also centuries off? We've been trying to align each other at least since laws like 𒌷𒅗𒄀𒈾 were written, and the only reason I'm not giving an even older example is that this is the oldest known written form to have survived, not that we weren't doing it before then.
> The other assumption is that wealth and power are distributed according to intelligence. This is obviously false, wealth and power are largely distributed according to who you or your father plays golf with. As long as AIs don't play golf and don't have fathers, we are quite safe.
Nepotism helps, but… huh, TIL that nobody knows who was the grandfather of one of the world's most famous dictators.
Cronyism is a viable alternative for a lot of power-seekers.
So I propose the Musk supremacy criterion to be the following.
Suppose that a wealthy and powerful human (such as Elon Musk) were to suddenly obtain the exact same sinister goals as the hypothetical superintelligent AI in question. Suppose further that this human was able to convince/coerce/bribe another N (say 1000) humans to follow his bidding.
A BadOutcome is said to be MuskSupreme if it could be accomplished by the superintelligent AI, but not by the suddenly-evil Musk and his accomplices.
Obviously[citation needed] it is only the MuskSupreme BadOutcomes we care about. Do there exist any?
For example 1000 people — but only if you get to choose who — is sufficient to take absolute control of both the US congress and the Russian State Duma (or a supermajority of those two plus the Russian Federation Council), which gives them the freedom to pass arbitrary constitutional amendments… so your scenario includes "gets crowned King of the USA and Russia, 90% of the global nuclear arsenal is now their personal property" as something we don't care about.
> As long as AIs don't play golf and don't have fathers, we are quite safe.
Until it becomes "who you exchange bytes most efficiently with", and all humans are at a disadvantage against a swarm of even below-average-intelligence AGI agents.
Because, as unlikely as it is, if we're discussing risk scenarios for AI getting out of hand, then a monolithic superintelligence is just one of the possibilities. What about a swarm of dumb AIs that are nonetheless capable of reasoning and decision-making, and they become a threat?
That's pretty much what we did. There's no super-intelligent monkey in charge, as much as some have tried to pretend there is, material or otherwise. There's just billions of average-intelligence monkeys, and we overran all Earth's ecosystems in a matter of centuries. Which is neither trivial nor fully explained yet.
The difference is that we have 100% complete control of these AIs. We can just go into the power grid substation next to the data center and throw the big breaker, and the AI ceases to exist.
When humans developed, we did not displace an external entity that had created us and that had complete power to kill us all in an instant.
Look at the measures that were implemented during Covid. Many of them were a lot more extreme than shutting down datacentres, yet they were aimed at mitigating a risk far short of "existential".
That data is in fact orthogonal to my point, for two reasons:
1. When we are talking about wealth and power that actually can influence the quality of the lives of many other people, we are talking about way less than 0.01% of the population. Those people aren't covered in this survey, and even if they were it would be impossible to identify on an axis spanning 0-100%.
2. Your linked article talks about income. People with significant wealth and power frequently have ordinary or below-ordinary income, for tax reasons.
Actually, it will have the opposite effect, at least in the short term.
People who own high value assets (everything from land to the AI) will continue to own them and there will be no opportunities for people to earn their way up (because they can be replaced by AI).
"The logical endpoint is the subjugation or elimination of our species"
Possibly, but it would be by our species (those who own and control the AI) rather than by the AI.
I would venture to say that transhumanism will be the path and goal of the capital class, as that will be a tangible advantage potentially within their grasp.
I suppose then that they would become "homo sapiens sapiens sapiens" or some other similarly hubris-laden label, and go on to abandon, dominate or subjugate the filthy hordes of mere Homo sapiens sapiens.
As Antonio Gramsci said: “Pessimism of the intellect, optimism of the will.”
The forces of blind or cynical techno-optimism accelerating capitalism may feel insurmountable, but the future is not set in stone. Every day around the world, people in seemingly hopeless circumstances nevertheless devote their lives to fighting for what they believe in, and sometimes enough people do this over years or even decades that there’s a rupture in oppressive systems and humanity is forever changed for the better. We can only strive to live by our most deeply held values, within the circumstances we were placed, so that when we look back at the end of our lives we can take comfort in the fact that we did the best we could, and just maybe this will be enough to avert the inevitable.
> The expert believes that “asking for regulations because of fear of superhuman intelligence is like asking for regulation of transatlantic flights at near the speed of sound in 1925.”
This assessment of the timeline is quite telling. If supersonic flight posed an existential threat to humanity, we certainly should have been thinking about how to mitigate it in 1925.
1925 of course would have been a great time to put limits on fossil fuel use in aviation along with the rest of the fossil fuel applications to manage the biggest current threat to human civilization. (Arrhenius did the science showing global warming in 1896 or so)
Given the dual use of fossil fuels between military and civilian purposes, I wonder whether any state that deliberately handicapped car/aero/petrochemicals would’ve been able to survive the early twentieth century.
Both the USA and Nazi Germany benefited massively from having a civilian industrial base that was complementary to military production.
Of course, you could also argue that Germany wouldn't have had its early successes in the war (if they had even started it). Or, at a third juncture, that it would have fared worse against the USSR.
There's a book called Freedom's Forge that I'm a fan of, it makes the argument that the Auto Industry (And assembly lines, mechanization in general) were the single most important reason the Allies won WWII. In fact all the big auto manufacturers of the time retooled their assembly lines to build tanks and airplanes. It's conceivable that if we never mass produced cars, the US wouldn't have had the capability to win the war.
Would you shut down the powerhouse of our economy -- travel, transportation, energy -- for something hypothetical that hasn't even happened and doesn't appear to be close to happening?
I'm pro-clean energy, but you can't do without fossil fuels. Not if you want society to keep climbing up and up and up.
A thing does not require intent or consciousness to be dangerous. How many chemists have blown themselves up because they didn't realize an experiment was dangerous? How many production systems have crashed because the developer didn't accurately predict what the code they wrote will do?
Alkali metals and C++ code do not require ill intent, but they will still obliterate your limbs / revenue if you build and use them wrong.
One of my more tangible hypotheses is a sort of runaway effect. Economic, geopolitical, and military competitive pressures will quickly push out anyone and anything that still relies on last era human-in-the-loop processes, the same way any organization that doesn't utilize artificial lighting, electricity, and instant communication will obviously be left far behind. You have to just trust that the machine running stock market transactions will do its math right.
But unlike transaction software failure modes, which quickly result in outright crashes or verifiably incorrect errors, the failure modes of non-Bayesian decision-making software probably look something like what happens when existing economic, geopolitical, and military decision-makers make decisions that are harmful, unethical, or otherwise undesirable for humanity. This time augmented with, if not superhuman intelligence, at least superhuman speed and superhuman breadth of knowledge.
Love that observation on C++. That's the reason I love C++. It's a language for those who need, nay crave, absolute raw performance. No training wheels. Short of assembly, it's as close to the machine as you can get.
Sure, I use Java, Python and JavaScript all the time. But when I need the performance, for demanding VR/graphics work, nothing comes close to C++'s combination of speed and expressive power of abstraction.
Oh yeah, it will replicate after the computer is shut down and reinstalled from scratch. Especially when it's much simpler than that, i.e. the whole thing lives in a throwaway container.
> I really can't grasp how people think that a system that doesn't have a need to preserve itself will somehow start thinking for itself.
Society exists because cooperation outperforms the alternatives. If you have human-level AI, at some point there is no benefit to cooperation, and there's a major incentive to prevent anyone else gaining access to equal or better AI.
AI itself does not need to have any motivation - people in charge have plenty of incentives to eliminate the rest once they don't need them anymore.
What makes you think they're predicting the apocalypse correctly, then?
Another thing the technical geniuses tend to be good at is exploiting the power they suddenly obtain in their own interest, either directly or with regulations and collusion with those who hold actual hard power.
Evil AI owners seem to be much closer and far more material than an evil AI, and coincidentally it's something that is almost entirely lacking from the discourse, as public attention is too focused on sci-fi hypotheticals.
The bar is different - saying "there is no risk of apocalypse" requires you to be ~100% certain, because if you're saying "I'm 99% certain that there won't be an apocalypse" then you're on the side of the AI-risk people, because a low-probability extinction event does justify action; the risk argument isn't that apocalypse is certain but rather that it is sufficiently plausible to warrant preventive measures.
I am only 99% certain that we won't be invaded by hostile aliens. Therefore we should take measures like building a giant space laser to prevent that apocalypse.
It is somewhat similar, but substantially different - we can make a solid argument that the likelihood of being invaded by hostile aliens within the next century is far lower than 1%, and also that if such an invasion did happen, building a giant space laser wouldn't make any difference at all.
The key difference between powerful alien invaders and us creating a powerful alien entity that we can't control is that the former either will or won't happen due to external circumstances, but the latter is something we would be doing to ourselves and can avoid if we choose to.
Bullshit. You can't presume to quantify the probability of either event. You're just making things up. All of the arguments are built on a foundation of sand. This stuff falls in the realm of religion and philosophy, not hard science and math.
The issue is that the doomsday scenario is extremely vague. The actual mechanism of action of a hypothetical rogue AGI is usually handwaved away as "it will be self-improving, superhumanly persuasive, and far smarter than us, so it will somehow figure out how to do something, or convince us to do it". What exactly will happen? How exactly will it happen? Will the world do nothing until that moment? How do society, politics, military fit into that scenario? All that rationalist navel gazing I've seen so far is either hilariously unaware of the existence of the outside world or assumes it won't change in the process.
You can't fight what you can't even see, let alone what you're not sure exists at all. You don't invent a pair of wings because 1900s-you thinks "the scientists will invent an anti-aging cure in the next decade, and surely personal flight will be ubiquitous in the 2000s." You don't design a plasma gun for your Mars landing just in case you land in a city between Martian canals and see an army of little green men there. The world doesn't work like that; by the time you reach the Martian surface, the context will be wildly different. You get burned and then put up guardrails, maybe. Not the other way around. Nobody can see through higher-order effects, no matter how smart they are. And as the threat becomes progressively clearer, there will be more caution, if needed. Premature optimization, yada yada.
What actually happens right now is everybody and my aunt seriously discussing the evil robots that will come and kill us. That's pure mass hysteria, caused by the scaremongering and the cult-like beliefs of very smart people with disproportionate influence who can't contain their own conjectures and bullshit within the realm of science fiction.
On the other hand, the end goal of OpenAI is major job replacement, according to their current charter. [1] "Broadly distributed"... will they distribute their utopia to North Korea? Not happening, is it? I think it's obvious that if the actual job replacement rate ever gets anywhere close to the levels of late-19th/early-20th-century industrialization, this will produce major societal shifts and struggles, wealth and power redistribution, and a lot of blood and wars. Because dependence on your job is the only ephemeral influence you (as a worker) have on this world. And of course, the companies that control the AI will be gatekeepers, and they will be more than happy to close off open research and open-source models, pull up the regulation ladder, and lie in bed with politicians and the military, like OpenAI already has for years. Of course they realize this, and their utopian, self-contradicting "charter" is nothing more than marketing hogwash that they have already changed and will change again in the future.
This is far more realistic and will happen much earlier than the rogue-AI science fiction, if that ever happens at all. In fact, it's slowly happening now, and it's not talked about nearly enough, because the attention is mostly misdirected onto the vague superhuman-AI red herring.
The actual mechanism of action is handwaved away because there are many options, we don't expect to ever have an exhaustive list and specifics of those are largely irrelevant with respect to preventing them, so IMHO it's not worth spending time and effort analyzing specific scenarios as long as we assume that there exists at least one plausible (even if unlikely) scenario. A hypothetical specific scenario of a rogue AI engineering and launching a deadly supervirus is effectively equivalent to a specific scenario resulting in a world consumed by 'grey goo' nanobots - you don't (can't) fix the former by implementing some resilience or detection for diseases, you don't (can't) fix the latter by doing extra research on nanorobotics, you approach both (and any others) by tackling the core issues of, for example, ensuring that you can control what goals artificial agents have even if they are self-modifying in some aspects.
Like, "What exactly will happen? How exactly will it happen?" is worth discussing if and only if one party seriously believes they can convince the other that none of the imaginable scenarios are even remotely plausible; and if we assume that there is at least one scenario where we can say "I'm 99% certain it won't happen and 1% it could" then that discussion is pretty much over, the existential risk is plausible (and the consequences of that are so much incomparably larger than e.g. major job displacement that it justifies attention even if it's many orders of magnitude less likely) and we should instead talk about how to prevent it.
I'm not making the argument that the existence of stronger-than-human general AI will result in a catastrophe, but I am asserting that the mere existence of a stronger-than-human general AI (without some controls we currently can't figure out how to make, or even know if they are possible) carries at least some plausible chance of existential risk - for the sake of argument, let's say at least 1%; and I am asserting that a 1% existential risk is a totally, absolutely unacceptably high risk that must not be allowed to happen, because it is far more important[1] than, e.g., 100% certainty of major job displacement and social unrest.
"Will the world do nothing until that moment?" I think that what we saw from the global reaction to things like start of Covid-19 or climate change is completely sufficient to assume that we can't rely on the world stopping a major-but-stoppable issue in a timely manner, so "surely the world will do something" is not a sufficiently convincing argument to discount the risk; I don't think you can plausibly deny that even for a clearly catastrophic problem there is at least a 10% chance that the world could still delay sufficient action until it's too late; and this means that it doesn't really matter what the exact likelihood of that is based on society, politics, military aspects, we should work with the assumption that the world actually might do nothing to prevent any specific scenario from unfolding, and we should de-risk it in other ways.
[1] Looking at other posts, perhaps this is where we'd disagree, and in that case it's probably the core of the discussion which also doesn't really depend on any details of specific scenarios.
If you read what Yann writes, you'll pretty quickly see that he's rather ignorant about AI. His opinion is probably worse on average than the typical technical generalist's.
That's hilarious. I have read a few things he has written which suggest he's definitely better than the average technical generalist. I haven't read everything, obviously, but he has written quite a lot: https://scholar.google.com/citations?user=WLN3QrAAAAAJ
You're missing the point and context. The person above me says this analysis is quite telling, then points out the counterfactual historical hypothetical, which makes no sense. Yann thinks supersonic flight was not worthy of precautionary-principle ethics in 1925? I'm saying the same thing: Yann's terrible, nonsense analogy is indeed poorly argued, but plausibly makes sense as a Freudian-slip inconsistency of some sort. Ergo, "it is quite telling." As to what's in his mind, or his motivations, I don't care to speculate.
Similarly, the fact that you make insinuations about what I think is pretty aggressive and terrible; this forum ought to have better manners than that when writing replies to complete strangers. Not everyone who has a different opinion is some crypto-conspiracy theorist, and you are wrong to jump to such a suggestion.
> I guess things like recursive self-improvement. You wouldn’t want to let your little AI go off and update its own code without you having oversight. Maybe that should even be a licensed activity—you know, just like for handling anthrax or nuclear materials.
He’s clearly aware of the risks of runaway, self-improving AI, and the idea that we can prevent this with regulation is laughable. The car is barreling towards the edge of the cliff, and many of our best and brightest have decided to just put a blindfold on and keep flooring it.
Lots of people I know dislike the taste of Beyond Meat specifically, but like Impossible and other fake meats. This article seems to be sensationalizing and over-generalizing from a single company’s failure.
They also used massive doses of LSD combined with electroshocks and managed to entirely wipe some patients' memories; this person's description of their experience is pretty mind-blowing: