Recently, we watched the “movie” “Master of AI”, a trilogy of dramas produced by Silicon Valley’s bigwig venture capitalists and tech giants with an investment of more than $10 billion, consisting of three episodes: “Partners in AI”, “Two Paths” and “The Return of the King”. Many applauded Sam Altman’s return to the “throne”, with some even comparing it to Jobs’ return to Apple.
However, the two are simply not comparable. “Master of AI” tells a completely different story: a battle between two paths. To pursue profit, or not to pursue profit? That is the question!
Let’s revisit the beginning of The Lord of the Rings. When Gandalf sees the One Ring at Bilbo’s house, he quickly realizes that such a powerful thing cannot be handled by ordinary people. Only someone pure and unworldly, like Frodo, can deal with it. That’s why Frodo is the heart of the Fellowship – he’s the only one who can carry something so powerful without being swallowed up by it. Not Gandalf, not Aragorn, not Legolas, not Gimli – only Frodo. The key to the whole Lord of the Rings story is Frodo’s unique nature.
*Note: Sam Altman is the CEO of OpenAI; Ilya Sutskever is one of the co-founders of OpenAI (he disagreed with Sam Altman over OpenAI’s choice of path and was ultimately marginalized); Greg Brockman is the CTO of OpenAI. Reid Hoffman is a well-known entrepreneur and venture capitalist who co-founded LinkedIn. Jessica Livingston is a founding partner of the startup accelerator Y Combinator. Peter Thiel is a well-known entrepreneur, venture capitalist, and co-founder of PayPal.*
Now, switch back to the beginning of Master of AI. In 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, and a number of tech companies announced the formation of OpenAI and pledged more than $1 billion to the venture. This is a group of the world’s smartest brains, almost as smart as Gandalf. They also knew that they were building something powerful, like the One Ring, that should not be owned and controlled by anyone pursuing their own interests. It must be carried by selfless people, like Frodo. So, instead of launching a for-profit company, they established OpenAI as a non-profit research organization – one that, by definition, does not pursue profit.
The notion that “such a powerful thing shouldn’t be controlled by a profit-oriented company” was most likely the consensus of OpenAI’s co-founders at the time of its founding – indeed, it is most likely the reason these founders came together to start OpenAI in the first place. Before OpenAI was founded, Google had already demonstrated the potential to wield this superpower. OpenAI looks like a “fellowship of protectors” formed by these visionary “guardians of humanity” to counter the AI monster that Google, a profit-seeking company, was turning into. Ilya may well have been persuaded to leave Google to lead OpenAI’s R&D because of his belief in this philosophy, because from any other point of view Ilya’s move would be meaningless. Back in 2015, no one could provide a better AI development platform than Google. Even though OpenAI’s backers were all Silicon Valley tycoons, none of them were AI practitioners (they don’t code at all). Not to mention the financial disadvantage: OpenAI was clearly not as well-funded as Google. The founders promised $1 billion, but only about 10% was fulfilled (roughly $100 million from Elon Musk and $30 million from other donors). From the point of view of personal financial returns, a non-profit organization could not offer Ilya better compensation than working at Google. The only thing that might convince Ilya to leave Google to lead OpenAI was this philosophy. Ilya’s philosophical views are not as well known to the public as those of his doctoral advisor, Geoffrey Hinton, who left Google in 2023 so that he could speak freely about the risks of AI (and who, back in the Reagan era, had moved from the US to Canada partly out of dissatisfaction with military funding of AI).
In short, the founders wanted OpenAI to be their Frodo, carrying the One Ring for them.
However, life is much easier in fiction and movies. There, the solution is simple: Tolkien simply created the character of Frodo, a selfless fellow who can resist the temptation of the One Ring and carry it without being corrupted by its power.
To make the character of Frodo more believable and natural, Tolkien even created a naïve, kind, and selfless race – the Hobbits. As the archetypal upright, kind-hearted hobbit, Frodo naturally became the chosen one, able to resist temptations that even the wise Gandalf could not. If Frodo’s nature is attributed to the racial characteristics of the hobbits, then Tolkien’s solution to the central problem of the Ring is essentially racist, pinning the hopes of humanity on the noble character of a certain race. As a non-racist, while I can enjoy plots in which superheroes (or superhero races) solve problems in novels or movies, I can’t be so naïve as to think the real world is as simple as the movies. In the real world, I don’t believe in this solution.
The real world is just much more complicated. In the case of OpenAI, most of the models built by OpenAI (especially the GPT family) are monsters of computing power, relying on electricity-powered chips (mostly GPUs). In the capitalist world, this means that it is in great need of capital. Therefore, without the blessing of capital, OpenAI’s model would not have developed into what it is today. In this sense, Sam Altman is a key figure as the company’s resource center. Thanks to Sam’s Silicon Valley connections, OpenAI has strong support from investors and hardware vendors.
There’s a reason the resources that power OpenAI’s models keep flowing in – profit. Wait, isn’t OpenAI a non-profit organization? Well, technically yes, but something has changed under the hood. While maintaining its nominally non-profit structure, OpenAI has been transforming into a for-profit entity. This happened in 2019 with the launch of OpenAI Global LLC, a for-profit subsidiary set up to legally attract venture funds and grant equity to employees. This clever move aligns OpenAI with the interests of investors (investors this time, not donors – so presumably in pursuit of profit). With this alignment, OpenAI could grow with the blessing of capital. OpenAI Global LLC has had a profound impact on OpenAI’s growth, most notably through the partnership with Microsoft: securing a $1 billion investment (and later billions more) and running OpenAI’s computing monster on Microsoft’s Azure-based supercomputing platform. We all know that a successful AI model requires three things: algorithms, data, and computing power. OpenAI brings together the world’s top AI experts for its models’ algorithms (a reminder: this also depends on capital – OpenAI’s team of professionals doesn’t come cheap). ChatGPT’s data comes primarily from the open internet, so it’s not a bottleneck. Computing power, built on chips and electricity, is an expensive undertaking. In a nutshell, two of these three elements – algorithms and computing power – are paid for mainly through OpenAI Global LLC’s for-profit structure. Without this constant supply of fuel, OpenAI could not have come this far on donations alone.
But this comes at a cost. It is almost impossible to be blessed by capital while maintaining independence. What is now called a non-profit framework is more nominal than substantive.
There are many indications that the feud between Ilya and Sam is precisely about this choice of path: Ilya seems to have been trying to stop OpenAI from deviating from the direction they originally set.
There is also a theory that Sam mishandled the so-called Q* model breakthrough, which led to this failed coup. But I don’t believe OpenAI’s board of directors would fire a very successful CEO for getting one particular issue wrong. This so-called Q* incident, if it happened, was at best a trigger.
The real problem with OpenAI may be that it has strayed from its original path. In 2018, Elon Musk parted ways with Sam for the same reason. It seems that in 2021, the same reason led a group of former members to leave OpenAI and found Anthropic. In addition, during the drama itself, the anonymous letter that Elon Musk shared on Twitter pointed to the same problem.
To profit or not to profit – the question seems to find its answer at the end of “The Return of the King”: with the return of Sam and the exile of Ilya, the battle over the path is over. OpenAI is destined to be a de facto for-profit company (probably still wearing a non-profit shell).
But don’t get me wrong. I’m not saying that Sam is a bad guy and Ilya is a good guy. I’m just pointing out that OpenAI is in a dilemma, which can be called a super-company dilemma:
A company that operates with the goal of making a profit can be controlled by the capital invested in it, which poses dangers, especially if the company is building a super-powerful tool. But if it doesn’t operate for profit, it may lack resources – and in a capital-intensive sector, that may mean it cannot build the product at all.
In fact, the birth of any super-powerful tool raises similar concerns about control, not limited to the corporate sphere. Take, for example, the recently released film Oppenheimer. When the atomic bomb was successfully detonated, Oppenheimer felt more fear than joy. Scientists at the time wanted to create a supranational organization to monopolize nuclear weapons. The idea is similar to what OpenAI’s founders thought – that something as super-powerful as the atomic bomb should not be in the hands of a single organization, not even the US government. And it was not just an idea; there was real action. Theodore Hall, a physicist in the Manhattan Project, leaked key details of the atomic bomb’s design to the Soviet Union, acknowledging in a 1997 statement that a “U.S. monopoly on nuclear weapons” was “dangerous and should be avoided.” In other words, Theodore Hall helped decentralize nuclear bomb technology. Decentralizing nuclear power by leaking secrets to the USSR was clearly a controversial practice (the Rosenbergs were even executed by electric chair for leaking, despite evidence suggesting they were wronged), but it reflected a consensus among scientists at the time (including Oppenheimer, the father of the atomic bomb): such a super-powerful thing should not be monopolized! But I’m not going to dive into how to deal with something super powerful, because that’s too broad a topic. Let’s refocus on the issue of super-powerful tools controlled by profit-oriented companies.
So far, we still haven’t mentioned Vitalik in the title of the article. What does Vitalik have to do with OpenAI or the Lord of the Rings?
That’s because Vitalik and Ethereum’s co-founders were once in a very similar situation.
In 2014, when Ethereum’s founders launched the project, they were divided over whether the legal entity to be established should be a non-profit organization or a for-profit corporation. The final choice, like OpenAI’s at the time, was a non-profit organization: the Ethereum Foundation. At the time, the disagreement among Ethereum’s founders was probably greater than that among OpenAI’s founders, and it led to the departure of some of them. By contrast, establishing OpenAI as a non-profit organization was a consensus among all its founders; the disagreement over OpenAI’s path came later.
As an outsider, it’s unclear to me whether the disagreement among Ethereum’s founders stemmed from an expectation that Ethereum would become a super-powerful “One Ring” and therefore should not be controlled by a profit-oriented entity. But it doesn’t matter. What matters is this: while Ethereum has grown into a powerful thing, the Ethereum Foundation remains a non-profit organization to this day, and it does not face OpenAI’s to-profit-or-not-to-profit dilemma. In fact, as of today, it hardly matters whether the Ethereum Foundation is a non-profit organization or a for-profit company. Perhaps this question was more important when Ethereum was first launched, but that is no longer the case. The powerful Ethereum has an autonomous life of its own and is not controlled by the Ethereum Foundation. In the course of its development, the Ethereum Foundation also seems to have faced a financing problem similar to OpenAI’s. For example, I once heard Xiao Feng, one of the early donors to the Ethereum Foundation, complain at a seminar that the Foundation was too poor to provide adequate financial support to developers. I don’t know how poor the Ethereum Foundation actually is, but this financial constraint doesn’t seem to have hindered Ethereum’s development. By contrast, some well-funded blockchain foundations cannot grow a thriving ecosystem simply by burning money. In this world, capital still matters, but only to an extent. Could OpenAI, by comparison, have done without capital? No way!
Ethereum and artificial intelligence are, of course, completely different technologies. But one thing is similar: the development of both relies on significant resource investment (i.e., capital). (Note: developing the Ethereum code itself may not require much capital, but I’m referring to building the entire Ethereum ecosystem.) To attract such a large amount of capital, OpenAI had to deviate from its original intentions and quietly transform into a de facto for-profit company. Ethereum, on the other hand, despite attracting a great deal of capital into its system, is not controlled by any for-profit organization. Blessed by capital yet not under its control – it’s almost a miracle!
Vitalik was able to do this because he had his Frodo – the blockchain!
Let’s classify technologies into two categories based on whether they actually produce things: production technologies and connection technologies. Artificial intelligence belongs to the former, blockchain to the latter. AI can perform many production activities: ChatGPT generates text, Midjourney generates images, and robots produce cars in Tesla’s unmanned factories.
Technically, blockchain doesn’t produce anything. It’s just a state machine, and it can’t even initiate any operation on its own. But as a connection technology, its importance lies in providing a paradigm for large-scale human collaboration that goes beyond the traditional for-profit company. Essentially, a company is a contract among shareholders, creditors, the board of directors, and management. The contract is binding because if one party breaches it, the other parties can sue in court – and that lawsuit has force because its outcome is carried out by the state apparatus (so-called law enforcement). So, fundamentally, a company is a contractual relationship enforced by the state machine. But now, blockchain brings us a new way of contracting that is enforced by technology. While Bitcoin’s on-chain contracts are still functionally very specific (and deliberately kept so), Ethereum’s smart contracts extend this new way of contracting into something universal. In essence, Ethereum allows humans to collaborate at scale in a whole new way across many fields, unlike the profit-oriented companies of the past. DeFi, for example, is a new way for people to collaborate in the financial sector.
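To make the “contract enforced by technology” idea concrete, here is a toy sketch in Python. It is not a real blockchain or smart-contract language; the Escrow class, its method names, and its string states are all illustrative assumptions. The point is only that the permitted transitions are enforced by the code itself, not by a court:

```python
# A toy "contract as a state machine": funds can only move along
# the transitions the code allows, regardless of what any party wants.

class Escrow:
    """Minimal escrow between a buyer and a seller."""

    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.state = "AWAITING_PAYMENT"
        self.balances = {buyer: price, seller: 0}

    def deposit(self, caller):
        # Only the buyer, and only from the initial state, can fund the escrow.
        if caller != self.buyer or self.state != "AWAITING_PAYMENT":
            raise PermissionError("invalid transition")
        self.balances[self.buyer] -= self.price
        self.state = "AWAITING_DELIVERY"

    def release(self, caller):
        # Only the buyer, and only after funding, can release payment.
        if caller != self.buyer or self.state != "AWAITING_DELIVERY":
            raise PermissionError("invalid transition")
        self.balances[self.seller] += self.price
        self.state = "COMPLETE"

e = Escrow("alice", "bob", 100)
e.deposit("alice")
e.release("alice")
print(e.balances)  # {'alice': 0, 'bob': 100}
```

No party can move funds along a path the code does not allow. On Ethereum, the analogous guarantee comes from every node executing the same contract code and agreeing on the result, rather than from trusting any single machine or court.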
In this sense, blockchain is a “super company”! It is because of this “super company” paradigm that Ethereum has been able to grow into the thriving state it is in today, without having to face OpenAI’s corporate dilemma. The blockchain is Vitalik’s Frodo, carrying the One Ring without being consumed by its power.
So now you can see that Frodo has been a key character behind all these stories:
Gandalf was lucky because he had Frodo as a friend in the fantasy world.
Vitalik is also lucky because in the new world he has his Frodo – blockchain.
Ilya and the other OpenAI founders weren’t so lucky, because they were in an old world where Frodo didn’t exist.
The Lord of the Rings, OpenAI and Ethereum: Why didn't Vitalik become the Sam Altman who was driven away?
Original author: 0xAlpha
Original Editor: GaryMa, Wu Blockchain