Why the Future Doesn’t Need Us?

A Penny for Bill Joy’s Thoughts

Shivani Gandhi
Jan 4, 2022


Meet Bill Joy, the co-founder of Sun Microsystems, who played a crucial role in developing Java, Jini, and even the vi editor. With such credentials, it's no surprise he's not a Luddite. Yet in his renowned essay, "Why the Future Doesn't Need Us," Joy expresses his concern that modern technology could pose a threat to both the planet and humanity. Despite his instrumental contributions, Joy highlights the need for caution in the development of technology.

Bill Joy’s Famous Essay: Why the Future Doesn’t Need Us

In a conversation with inventor Ray Kurzweil, Joy was taken aback by the acceleration of technology and the possibility of superior robots. His interest piqued, Joy read Kurzweil's book, "The Age of Spiritual Machines," which explores the idea of human beings merging with robotics to achieve immortality. As the world moves closer to a future of advanced technology, it's worth considering the implications of these advancements for human life.

When Joy heard Ray's compelling arguments for these futuristic ideas, he was startled. Because Ray was someone he respected, his endorsement of these concepts made Joy realize that they could indeed become a reality. Despite his initial surprise, Joy was quick to accept Kurzweil's basic claims, given Ray's proven ability to imagine and create the future. However, Joy still believes that the potential dangers of these concepts are understated. Read on to explore the fascinating debate between Joy and Kurzweil over this vision of the future.

Joy goes on to discuss his fears about the potential dangers of twenty-first-century technologies, grouped together as GNR: genetics, nanotechnology, and robotics. What distinguishes these technologies from those of the twentieth century is their ability to self-replicate. This gives great power to humans who could misuse it, but it could also inadvertently lead to uncontrollable self-replication by GNR itself. Quoting an argument from Kaczynski that Joy raises,

“If we are at the mercy of our machines, it is not that we would give them control or that they would take control, rather, we might become so dependent on them that we would have to accept their commands” (Joy, 2000).

Building off of this, the article also mentions that giving control to a few humans would concentrate power in an "elite," which could likewise turn malicious. From his experience in the field, Joy acknowledges that we will soon reach a point where we have the power to create these types of technology; however, he also notes that we tend to "overestimate our ability to design".

This insinuates that while we may achieve our goals, the results will still need refinement, and in trying to perfect these AI systems, we may cause many unintended consequences. Hence, in Joy's opinion, humility is necessary in developing technology, especially intelligent systems.

“Given the power of these systems, shouldn’t we be asking how we can best coexist with them? And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn’t we proceed with great caution?” (Joy, 2000).

I agree with Joy here. As humans, we haven't even completely understood ourselves, and yet here we are trying to create machines that are "better" than us. Even though we may be nearing the point of having strong enough computational power, there are bound to be some mistakes. We should take care of these early on and be wary of small issues as they arise, because if we continuously overlook them, they will accumulate into a "Frankenstein".

Joy explicitly presents his case for relinquishment, limiting the development of technology, because the cons of dangerous consequences outweigh the pros of gaining more knowledge. He acknowledges why we keep pushing toward more development: we humans have an innate drive to question and seek knowledge. Restricting this drive causes issues and problems to arise; however, if this knowledge puts us on a clear path to extinction, common sense would tell us to reexamine and redefine our drive. Simply put, if having access to this type of knowledge is dangerous, why should we pursue it?

With Oppenheimer, Joy raises this exact point. The Trinity test, the first atomic test, was an effort not just continued but actively pushed by Oppenheimer, who began to deviate from the original purpose of the project. In the quest to pursue knowledge and the fulfillment it brings, Oppenheimer openly and continuously justified all of the consequences of the atomic bombs. In Oppenheimer's words,

“It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge and are willing to take the consequences.”

By this point in the article, Joy has expressed his opinions on GNR and on artificial intelligence, with machines eventually making decisions for us and enslaving the human race. Having posed his case, his conclusion is to encourage society to take responsibility for its actions and to take preventative measures that decrease the chances of technological dangers. Taking ethics into consideration, he mentions that scientists and engineers would have to adopt a strong code of ethical conduct and have the courage to whistle-blow, even at high personal cost, in order to verify relinquishment and provide transparency around twenty-first-century technologies.

Joy has presented many points and arguments that have some relevance and importance; however, I think they should be taken with a grain of salt. Joy presents a worst-case scenario for the advancement of technology and justifies most of it by appealing to emotion instead of facts. His view is almost pessimistic, taking a Kant-like stance that assumes humans are inherently evil (Hanson, Immanuel Kant: Radical Evil). Joy himself said,

“We have to encourage the future we want rather than trying to prevent the future we fear”.

Is he not contradicting himself with this article? Joy, despite his experience in the field, is fearful of an unknown future. In justifying his rationale with emotions, he is not looking at the factual trends or the possibility of positive outcomes. If we look at technological advancement as a whole over the past century, despite some consequences, society has improved and the human race is better off than it was 100 years ago. Take the healthcare industry as an example: treatments for cancer and other diseases have come a long way strictly due to technological advancements. If we were to follow Joy's belief in "find[ing] alternative outlets for our creative forces, beyond the culture of perpetual economic growth", by which he means finding an outlet to redirect the drive for power through knowledge, what would we say to future generations left less prosperous because technological growth was stopped? Essentially, we would be stuck in time. We would only know the things we currently know. There would be no progress anywhere without technological advancements.

What would we do in the age of another pandemic without technology? There would be nothing we could say to future generations, because we would have no justification. "We stopped advancements because we were scared of dangerous possibilities for the future", but we never knew for sure whether those dangers would materialize. There is no way to justify that action. Joy's concern that advancement will lead to human extinction fails to consider that without any advancement we would essentially meet the same fate, since we would be unable to adapt to the changing environment around us.

Joy’s last appeal to his audience was once again to emotion. “Each of us has our precious things,” Joy says, “and as we care for them we locate the essence of our humanity. In the end, it is because of our great capacity for caring that I remain optimistic we will confront the dangerous issues now before us.” While I remain optimistic, it isn’t in the sense that Joy means. If anything, it is in the sense that Kurzweil believes: throughout this debate, Kurzweil acknowledges the dangers of technology but also believes that we can make it through the consequences of advancement, which, considering previous trends and their consequences, is the more likely outcome. I do not believe we are on a straight track for destruction, as Joy does.

Many of Joy’s scenarios are a stretch and don’t consider the facts. Sure, there is some truth to his concerns. We should stay humble with advanced technology, not because we overestimate our abilities, but because we understand its power. We should have strong ethical principles, not because humans are evil, but because with knowledge comes great responsibility. Ultimately everything comes to an end, and the only thing in life that is promised is death, but following Joy’s theories would ease, if not speed up, the end of the human race, not prolong it.


B. Joy, “Why the Future Doesn’t Need Us,” Wired, 01-Apr-2000. [Online]. Available: https://www.wired.com/2000/04/joy-2/. [Accessed: 27-Jan-2021].

E. M. Hanson, “Immanuel Kant: Radical Evil,” Internet Encyclopedia of Philosophy. [Online]. Available: https://iep.utm.edu/rad-evil/#:~:text=Contrary%20to%20the%20latitudinarianism%20of,all%20of%20his%20or%20her. [Accessed: 27-Jan-2021].

J. Messerly, “Summary of Bill Joy’s, ‘Why the future doesn’t need us,’” Reason and Meaning, 02-Feb-2016. [Online]. Available: https://reasonandmeaning.com/2016/02/15/summary-of-bill-joys-why-the-future-doesnt-need-us/. [Accessed: 27-Jan-2021].
