Opinions expressed by Entrepreneur contributors are their own.
Bringing existing AI technology up to human-level intelligence has long been the holy grail of AI engineers. Although such superintelligence would bring immense benefits to humankind, AI development has always carried an element of dread — whether instilled by science-fiction literature, movies or the top executives of Silicon Valley — and rightfully so.
It seems we humans are unable to adequately comprehend and address the dangers of building superintelligence, and our outlook on its risks and rewards is mixed. That is because developing AI could vastly improve our lives — from curing cancer and other fatal diseases to producing cheap goods in abundance — but the same technology also carries real dangers.
Superior machines will improve themselves without our involvement
If we assume that we will keep improving our technology virtually forever, we can infer that the creation of superintelligence is highly probable — and that the pace of progress toward it is widely underestimated. In the not-so-distant future, there will come a day when we build machines that are smarter than we are. And then those superior machines will improve themselves without our involvement.
Then you have a Matrix scenario: superior technology emerges, and for the first time in millions of years, we are not the smartest beings on this planet. At that stage, we cannot be sure that our goals and the AI's goals coincide, and where they diverge, we may find ourselves at war with the technology. Once such superintelligence exists, the future will be shaped by its preferences. More specifically, given the very nature of AI, it maximizes efficiency — and if we're in its way, it might treat us with the same disregard we reserve for other animal species.
Such a scenario is theoretically possible, but it's hard to tell when it will happen, or whether it will happen at all. After all, humans are imperfect beings — can imperfect beings create something that has no weaknesses and is virtually perfect? Moreover, AI can be developed so that its behavior and decision-making are aligned with what we deem acceptable. In other words, AI can be built on core principles that are noble and non-violent.
The benefits of AI tech
Although the future of AI may be scary, we might be 30 to 70 years away from creating something with superior intelligence. In the meantime, though, the benefits of AI tech across a range of industries can be tremendous.
AI development can lead to the creation of cheap products in abundance, significantly decreasing the cost of living and pulling many people out of poverty. Cheaper technology can in turn spawn entirely new products and industries, creating more jobs rather than fewer.
The use cases of AI in medicine are nearly endless. Imagine AI diagnosing — or even predicting — cancer and other deadly diseases in advance, enabling people to take preventive measures. AI could also help us develop cures for diseases that medicine has so far failed to defeat. Major advances in medicine would automatically mean longer human lifespans and a better quality of life. Treatment might become so tailored to an individual's genome that our fitness and overall health improve dramatically; people might no longer retire at 65, easing the aging-population crisis in which too few working people support everyone else.
AI can improve the cybersecurity of all kinds of systems, making online spaces and platforms far more secure.
Self-driving cars that have almost zero accidents will save human lives.
AI could be used to make quick sense of financial data or of public sentiment on a given issue. In short, AI analytics tools will be immensely powerful and precise.
We could continue with this list for quite some time, but the main point is that AI tech is going to affect almost every aspect of our lives, and the positive implications are huge.
Back to reality now.
At this point, AIs operate in narrow environments. There will come a time when different AIs can communicate and learn from one another — and this is exactly what our company is trying to achieve. We are developing a protocol that gives different AIs connectivity and data exchange, and building a marketplace for AI services in which AI systems can communicate, exchange data, learn from each other and trade services. The goal is to democratize access to AI. The less concentrated AI power is, the less risk the human species bears collectively.
The public is often unsure whether AI will do humanity more good or more harm. We believe that if AI is developed responsibly, tested in a confined and controlled environment, and aligned with our core values, there is little risk of it getting out of control — while its benefit to humanity will be invaluable.
The most crucial time is now, when the initial conditions for AGI must be determined and instilled. Building benevolent AGI is extremely hard — but it is definitely within our reach.