Built on a faulty definition of intelligence, the Singularity meme is an informal fallacy with limited utility that constricts our view of the future if we rely on it too heavily. As we continue to refine our collective model of a rapidly accelerating future dominated by convergence, we should look to more comprehensive scientific models to take its place.
Let me start off by saying that Ray Kurzweil’s The Age of Spiritual Machines is one of the most important books I have ever read. It ably makes the case for accelerating change and a resulting Singularity, so I highly recommend it to those interested in exploring the possible futures ahead of us.
Each definition contains valuable nuggets about how the future may unfold. Yet I have come to believe all three are fundamentally flawed due to their reliance on the vague term "intelligence".
Intelligence Remains Undefined: There is no objective, comprehensive, scientifically valid description of the term. Though it’s easy to believe we understand what intelligence is and how it works, we humans have not yet achieved consensus on an overarching definition nor its constituent properties. There are many theories, but an objective law has yet to emerge.
According to an APA report titled Intelligence: Knowns and Unknowns, “when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen somewhat different definitions.”
The Wikipedia definition reflects this vagueness:
Intelligence (also called intellect) is an umbrella term used to describe a property of the mind that encompasses many related abilities, such as the capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn. There are several ways to define intelligence. In some cases, intelligence may include traits such as creativity, personality, character, knowledge, or wisdom. However, most psychologists prefer not to include these traits in the definition of intelligence.
At the same time, the bulk of the AI theorists working to create Strong AI/AGI that matches or exceeds human intelligence are either 1) applying a very narrow definition of intelligence that equates one human brain or personality to a discrete unit of intelligence, or 2) building logical or neural processes step-by-step and refraining from venturing a concrete definition.
Definitions of the Singularity Rely on Vague Definitions of Intelligence that Don’t Hold Up: Singularity proponents and detractors alike go about making their arguments without questioning the underlying assumption that human intelligence is composed of discrete units. By and large, they either overtly or tacitly equate intelligence to the functions of an individual brain or system. This is not surprising considering how the brain likes to simplify subject and object so that we can go about living our lives. But that fundamental assumption appears to be wrong, and at the very least is far from verifiable.
“The one thing the ‘Singularity’ will in fact be able to achieve will be the commoditizing of intelligence.” -John
Here’s my response:
The gradual commoditization of processes and basic intelligence has been underway for a while already. Certainly I can see the water level rising. But if the proper intelligence growth model is collective and individual intelligence amplification (IA) (Flynn’s research would certainly suggest the latter), then we’ll keep evolving right alongside AI. Perhaps this will be a grow-and-become-more-novel/specialized-or-be-commoditized model, but it certainly leaves some room, even in an abrupt singularity scenario, for the non-commoditization of some or most “human” intelligence (which I think is the wrong way to view intelligence anyway; it’s more a system property that manifests in agents).
That being said, super-smart tech will be very disruptive in the coming decade, and it remains to be seen how quickly we’ll amplify our intelligence. Still, I do think acceleration in information, technology, and communication will improve our ability to cope and free up more brains for higher-level functions.
Is the universe a giant computer rigged to generate life in multiple galaxies? Does it harness the power of both evolution and development for some specific purpose?
These are some of the questions that will be tackled next week at the world’s first ever Evo Devo Universe Conference held in Paris, France.
Organized by the Acceleration Studies Foundation, the conference will bring together some of the most progressive cosmologists, complexity theorists, systems thinkers, and philosophers currently “exploring and critiquing models, hypotheses, and questions relating to the extent and interaction of evolutionary (or quasi-evolutionary) and developmental (or quasi-developmental) processes in the universe and its subsystems.”
In other words, it’s a world-class pow-wow for the thinkers who are working to uncover the rule sets that govern information, physics, chemistry, and all universal processes. And it will probably catalyze the birth of some important new theories and research paths in the months to come. For example, it is possible that someone presenting at this conference will pave the way for a more comprehensive information theory that accounts for technology and plays nicely with existing scientific laws.
Here’s what conference organizer John Smart, futurist and systems theorist (and good friend), had to say about the event as I caught him just before he left for San Jose airport yesterday:
Evo Devo Universe keynote speakers will include:
James N. Gardner, a complexity theorist and science essayist, with a background in philosophy and theoretical biology.
Francis Heylighen, a systems theorist and cyberneticist focusing on the evolution of complexity.
Laurent Nottale, a cosmologist and pioneering theorist in scale relativity and fractal space-time.
The Singularity Frankenstein has been rearing its amorphous head of late and evoking reactions from a variety of big thinkers. The latest to draw a line in the sands of accelerating change is Kevin Kelly, Wired co-founder and evolutionary technologist, who makes a compelling case against a sharply punctuated and obvious singularity. His argument is based on the following points:
1) If a Strong-AI singularity emerges, Google will likely build it first.
“My current bet is that this smarter-than-us intelligence will not be created by Apple, or IBM, or two unknown guys in a garage, but by Google; that is, it will emerge sooner or later as the World Wide Computer on the internet,” writes Kelly.
More fundamentally, I think our system is consistently advancing its intelligence, making human intelligence non-static. Therefore the notion of Strong AI is an illusion because our basis for comparison 1) is constantly changing, and 2) is erroneously based on a simple assessment of the computational power of a single brain outside of environmental context, a finding backed by cognitive historian James Flynn.
So yes, Google may well mimic the human brain and out-compete other top-down or neural net projects, but it won’t really matter because intelligence will increasingly be viewed as a network-related property. (It’s a technical point, but an important distinction.)
2) The Singularity recedes as we develop new abilities.
Kelly writes, “The Singularity is an illusion that will be constantly retreating—always ‘near’ but never arriving.”
This statement is spot-on. As we amplify our collective intelligence (IA) at an accelerating rate and develop new capabilities we get better at peering ahead. The implication is that we co-evolve with technology and information to do so, assimilating intelligence along the way. In such an IA scenario, there simply is no dichotomy between us and it. It’s a we.
While Kelly alludes to IA in his World Wide Computer statement, he could bolster his argument by stressing the connection between human, informational, and technological evolution and development.
3) Imagining a sequence of scenarios doesn’t take into account system dynamics. Thinking machines must co-evolve with the environment in order for intelligence to be meaningful.
“Thinking is only part of science; maybe even a small part,” points out Kelly. “Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world’s problems. There won’t be instant discoveries the minute, hour, day or year a smarter-than-human AI appears.”
Summary: Spivack’s observation that the web is saturating the world (rather than just enabling a super fast web that the world and humans can enter) reinforces the idea that our system as a whole is amplifying its total intelligence and capabilities, rather than just supporting the digitization and “upload” of everything. It’s a basic, yet profound distinction that fundamentally changes how we expect the future to unfold.
Nova Spivack has posted some interesting thoughts up on his personal Twine, noting that “The Web is starting to spread outside of what we think of as ‘the Web’ and into ‘the World.’” He points out that “the digital world is going physical”, an idea that opens up an array of new futures previously not imagined by thinkers who’ve largely focused on digitization and inner space as the inevitable human destiny. Spivack concludes that “Beyond just a Global Brain, we are really building a Global Body.”
This thinking resonates with me because it moves away from a human-centric view of the future (digitization is good because we can live forever) in favor of a more systems-centric explanation (the system as a whole is getting smarter for its own reasons). It also makes sense in the context of an ongoing discussion I’ve been having with good friend and EvoDevo systems thinker John Smart about the direct relationship between A) our collective drive to tunnel toward Inner Space (nanotech, chemistry, energy efficiency, etc.) and B) our drive to expand into Outer Space (exploration, space travel, universe mapping, manufacturing, resource discovery).
An increasingly intelligent, self-organizing web that furthers the growth of both the Global Brain, a concept originally advanced by Francis Heylighen in 1995, and what Spivack calls the Global Body seems like the necessary tissue connecting our Inner Space and Outer Space focused appendages. In other words, the web that Spivack observes is not only concerned with creating better simulations, but also with expanding reach and bettering physical capabilities.
This jibes with the idea that the point of the game of life, including the human-created web, is to ensure the survival of our global system via knowledge gathering and expansion, rather than the species-centric view that the future is solely about digitizing ourselves and escaping our biological chains. If in fact we are living in a system that purposely or automagically (to borrow a term from another futurist colleague, Jerry Paffendorf) seeks to increase control over its perceived environment (COPE) in order to ensure survival and expansion, then the creation of a web that serves this system, rather than just its human components, seems perfectly rational.
From this perspective, a merger between the web and physical world makes a lot of sense as it accelerates the input, sorting and output of information, resulting in increased system quantification and knowledge generation. In other words, a world-as-web + web-as-world boosts both our collective intelligence and capabilities.
Of course, this sort of thinking steadily pulls us down the rabbit hole to a place where the physical world can be viewed as web and the web as increasingly physical. But, then again, we’re due for some serious paradigm shifts, aren’t we?
Yesterday, YouTube co-founder Chad Hurley shot off some optimistic predictions about the web video industry. He opined that ten years from now “online video broadcasting will be the most ubiquitous and accessible form of communication.”
I certainly buy that web video broadcasting will be near ubiquitous. Hurley’s reasoning nicely reflects my own:
“The tools for video recording will continue to become smaller and more affordable. Personal media devices will be universal and interconnected. Even more people will have the opportunity to record and share even more video with a small group of friends or everyone around the world.”
But I am not sure that I’m sold on web video as the “most accessible form of communication”.
Why? Not because I think it won’t explode – web video will be massive by 2018. Rather, I believe it’s possible that some nascent communication technology may just zoom past web video during that span, or more likely, subsume it.
Enterprise prediction markets have been growing in popularity, but face three major hurdles to success: 1) lack of access to all relevant information, 2) regulatory concerns, and 3) adoption / sticky use. As these are resolved, new-age prediction markets will increase in value, diffuse more quickly and make us smarter as a species.
1. Lack of access to relevant information: My big takeaway from The Wisdom of Crowds, the prediction market bible by journalist James Surowiecki, was that a large group of humans can consistently out-predict individuals, but only if all the brains are knowledgeable about the given topic area. For example, farmers won’t be great at predicting next year’s fashion colors – that will be left to those with more direct exposure to the appropriate industry trends.
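Surowiecki’s core claim – that averaging many informed but noisy guesses reliably beats the typical individual guess – is easy to see in a quick simulation. This is a minimal sketch with made-up numbers, purely illustrative:

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0  # the quantity the market is trying to predict (hypothetical)

def informed_estimate(noise=20.0):
    # An informed participant: an unbiased but noisy guess around the truth.
    return TRUE_VALUE + random.gauss(0, noise)

crowd = [informed_estimate() for _ in range(1000)]

# Error of the aggregated (mean) prediction vs. the average individual's error.
crowd_error = abs(statistics.mean(crowd) - TRUE_VALUE)
avg_individual_error = statistics.mean(abs(e - TRUE_VALUE) for e in crowd)

print(f"crowd error: {crowd_error:.2f}")
print(f"average individual error: {avg_individual_error:.2f}")
```

The aggregated estimate lands within a fraction of a point of the truth, while the average individual misses by roughly the noise level – but note the assumption doing the work: every estimator is *informed* (unbiased). Give the crowd a shared bias and the advantage evaporates, which is exactly the access-to-information problem described above.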
Prediction market guru Chris Masse points out a similar flaw plaguing most, if not all, enterprise prediction markets: lack of access to “‘experts’ and other ‘business leaders’”. Masse argues that minus this crucial top-level information, a company’s internal “prediction markets would be clueless, useless, and worthless.”
Solutions: The obvious but eminently unpalatable solution is for corporations like Google, GE, and Microsoft that already utilize prediction markets to open up access to more of their top-level data to employees or even the public. This would immediately result in better predictions, but would obviously benefit their numerous cut-throat competitors. It will take some time for big businesses to implement such transparent practices, though I can imagine the right start-ups successfully implementing such an open strategy and then scaling.
On the flip side of the coin, companies could up the incentives for successful predicting in external but vastly larger markets, essentially throwing more money and brains at the process. They could then make use of the growing number of top-rated performers and ideas (I’d be shocked if they’re not already mining such data). It seems like this will gradually occur as 1) companies increasingly look to the web for ideas, and 2) the semantic web and better search make everyone smarter, faster.
Then again, a more immediately plausible middle road could involve bringing on a group of professional predictors, say 40–100 diverse individuals, and then giving them access to the highest-level information. Of course, they would be required to live in a cave and never again communicate with friends or family…
A one-stop shop for ancestral information, Footnote aggregates, sorts and structures historical documents “relating to the Revolutionary War, Civil War, WWI, WWII, US Presidents, historical newspapers, naturalization documents, etc”, then mixes in social networking and user feedback to create useful timelines, historical links and family trees. Basically, they’re trying to corner the market on ancestral information by taking the most comprehensive approach possible.
It’s a brilliant and inevitable idea. As Facebook, MySpace, Orkut, LinkedIn, Google, and Wikipedia dominate the social networking and information pie, other companies looking to strike it rich are forced to carve out more focused value niches outside the direct scope of the big boys. From a macro perspective, it’s clear that these companies need to mix a monetizable model with novel/valuable content and a good user experience. And that’s exactly what Footnote is trying to pull off here.
By focusing on historical information, Footnote is avoiding major head-on competition (though Google will certainly make a big dent and – then again – is also a likely acquirer) as it tries to rapidly grow community and data value. As a result, it has become yet another force behind the relatively nascent Retro-Quant trend, essentially becoming a smarter historian thanks to its unique techno-social approach.
The fact that such a business model makes perfect economic sense reinforces the notion that Retro-Quant will grow to become a multi-billion-dollar industry sometime over the next several years. There’s simply too much value to be unearthed: human behavioral data, hidden crime (on many levels), genetic/evolutionary patterns, cognitive patterns, etc.
As touch-screen interfaces become more reactive and computers get smarter, we’re bound to see faster, more responsive, and more forgiving interfaces. Case in point is a new product called Swype that allows users to intuitively swipe through various letters on a touch-screen keyboard in a single fluid motion, then statistically calculates what you intended to type.
If it sounds a lot like the next generation of T9, that’s because one of the founders, Cliff Kushler, also invented that huge time-saver. But make no mistake about it, Swype marks a big leap in next-gen productivity. Already garnering rave reviews, it works “across a variety of devices such as phones, tablets, game consoles, kiosks, televisions, and virtual screens” and lets formerly slow texters achieve input speeds of over 50 words per minute. That’s right – many people can’t even type that quickly on a regular keyboard.
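Under the hood, an engine like this has to map a messy finger path to a dictionary word. Here is a toy sketch of the idea – not Swype’s actual algorithm (real engines use geometric path models and language statistics) – that treats a word as a match if its letters appear, in order, among the keys the finger crossed:

```python
def is_subsequence(word, path):
    """True if word's letters appear in order within the swipe path."""
    it = iter(path)
    return all(ch in it for ch in word)  # membership test consumes the iterator

def decode_swipe(path, vocabulary):
    """Return candidate words for a swipe path, longest first.

    Heuristic: a candidate must start and end on the path's endpoints and
    appear as an ordered subsequence of the keys crossed in between.
    """
    candidates = [w for w in vocabulary
                  if w[0] == path[0] and w[-1] == path[-1]
                  and is_subsequence(w, path)]
    return sorted(candidates, key=len, reverse=True)

# Hypothetical mini-dictionary and a finger sweep h→e→l→l→o that also
# brushes the adjacent keys g and k along the way.
VOCAB = ["hello", "help", "hole", "ho", "hell"]
print(decode_swipe("hgeklllo", VOCAB))  # → ['hello', 'ho']
```

Even this crude filter narrows eight noisy keystrokes down to two candidates; a statistical language model then does the rest of the disambiguation, which is presumably where the founders’ T9 experience pays off.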
Think you’re immune to Google Search? A new effort by the company promises to unearth your embarrassing elementary school photos, achievements, and other data, then incorporate them into the Google brain.
The Retro-Active Quantification Industry, which I believe will grow to a multi-billion-dollar valuation by 2015, made a big leap forward this week with the release of Google’s News Archive Search.
Many years in the works, the new service/feature allows users to do exactly what it says: search a huge body of archived small-town newspapers that have been scanned into Google’s system, converted from image to text using the company’s optical character recognition system (note: they’re also working on a similar but more robust system that will mine text data – t-shirts, street signs, house numbers, etc. – from photographs), and then indexed using Google’s world-famous search.
Best of all, Google allows you to view the original scanned images and “browse through them exactly as they were printed—photographs, headlines, articles, advertisements and all”, much like a microfiche in a library basement (remember those?).
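The pipeline behind a service like this – scan, OCR, index, search – bottoms out in something like an inverted index mapping each term to the documents containing it. A toy sketch (nothing like Google’s production system, of course, and with invented archive entries):

```python
from collections import defaultdict

def build_index(documents):
    """Build a simple inverted index: term -> set of document ids."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query term (AND search)."""
    postings = [index.get(term.lower(), set()) for term in query.split()]
    return set.intersection(*postings) if postings else set()

# Hypothetical OCR output from two scanned papers.
archive = {
    "gazette-1908-05-01": "local farmers report record wheat harvest",
    "herald-1923-11-12": "new bridge opens to record crowds",
}
print(search(build_index(archive), "record harvest"))
```

The query "record harvest" intersects the posting sets for both terms and returns only the 1908 gazette page. The hard part, of course, isn’t the index – it’s getting a century of microfiche-quality newsprint through OCR cleanly enough to populate it.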