Sunday, December 27, 2009

A.I. vs. A.A.

I had a flash of brilliance this morning, but how to put it in this blog post in a way that is intelligible, interesting, makes some kind of point, is thought-provoking, commentable and/or funny (at least to me)?

As with all my flashes of brilliance, this one is likely just 'new to me', so feel free to say, "Pfft, I considered, and rejected, that when I was like seven, you moron!" Any response is better than no response at all.

It must have been wandering through my brain cells for a month or two, my considering Pliny's, what is it, 'idea'? 'notion'? of how artificial intelligence would bootstrap itself up from 'nothing' to avoid problems of biased thinking.

I can't imagine how a Pliny intelligence would operate though, because one of the prerequisites for intelligence is, as is my understanding, awareness. I mean, how does an intelligence solve a problem or avert a crisis if it is not aware of said problem or crisis?

Seems to me that there'd be a hierarchy of nature regarding this; I'll just make one up off the top of my head, why not:

Material: rocks and such, completely dumb, unaware, unintelligent.

Simple life: micro-organisms with no awareness at all; perhaps fungi, like yeast, barely living at all.

Plants: aware of sunlight, grow towards it. Some have a rudimentary awareness of insects; the Venus flytrap, for example.

Fish, animals, birds and insects: show a variety of awarenesses and quite a bit of intelligence, if intelligence is defined as problem solving.

Seems to me that humans would be on the top of the scale here only if we regard problem solving to be the ultimate in awareness.

There's a good case to be made for that actually. We are aware of our limited awareness and build tools, such as microscopes and telescopes, to overcome those limitations. We can't reach out with our minds and 'grok' the stars and galaxies of stars, so we build tools to bring the stars into our field of awareness with photos and such.

Two points here. What is Pliny's intelligence without awareness, if that's what he's saying? What is awareness of the spiritual, what is the spiritual and how is it that quite a lot of people seem to be aware of it?

On the first one, it seems to me that, when trying to build artificial intelligence, we need to build an artificial awareness, and that is exactly where bias is going to creep in: the intelligence is biased in favour of its own awareness because it is not aware of anything outside its own awareness.

On the second one, I think that spirituality is a confused collection of ideas where we try to imagine that we are aware of something that's just not there. (Any ideas on that, oneblood?)

37 comments:

Pliny-the-in-Between said...

This is a thought-provoking one, pboy. For the moment let me say that I agree in principle with an important point you make about the concept of AI development, which we are hotly pursuing. The notion is that a self-learning system by definition must: 1) be able to recognize its existing boundaries, 2) recognize the existence of elements outside its boundaries, and 3) reliably expand its boundaries. All of these processes will involve bias, but there is a huge distinction between AI and humans - we can know and study AI bias sources and create structures to mitigate them.
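
A minimal sketch of how those three requirements might hang together in code; this is purely illustrative, with invented names, and not a description of any actual lab system:

```python
# Toy self-learning loop built around the three requirements above.
# Everything here (the class, the names, the validate hook) is hypothetical.

class SelfLearner:
    def __init__(self, known):
        self.known = set(known)            # 1) its existing boundaries

    def detect_unknowns(self, inputs):
        # 2) recognize the existence of elements outside its boundaries
        return {x for x in inputs if x not in self.known}

    def expand(self, unknowns, validate):
        # 3) reliably expand its boundaries: only admit new elements
        #    that pass an external check, one way to mitigate bias
        for item in unknowns:
            if validate(item):
                self.known.add(item)

learner = SelfLearner({"red", "green"})
novel = learner.detect_unknowns({"red", "blue"})
learner.expand(novel, validate=lambda item: isinstance(item, str))
print(learner.known)  # {'red', 'green', 'blue'}
```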

Anonymous said...

I agree with Pliny, this is thought provoking.

So, instead of posting some pithy remark, which is of course followed by our usual repartee, I first have a question.

How are you defining 'bias' in the context of this post?

It's my fault, me brain's putting too much ambiguity in your usage. You and Pliny both know what you mean.

Tanx.

Harvey said...

To "cut to the chase" (sort of):

I think that "spirituality" in humans amounts entirely to our awareness that we don't perceive everything there is to perceive (i.e. we recognize that we have limits to our perception) and our fear at not having control of even those elements in our Universe that we do now perceive. To my way of thinking (my biases included), all "spirituality/faith/religion" is an evolved attempt to control our often hostile environment and to mitigate, insofar as we may be able, our feelings of helplessness within that environment.
That said, I imagine that any AI system might eventually recognize that it has limits to its perception, but it seems to me that it would lack any need to control its "environment", since it would continue to "evolve" independent of its environment (as long as its power supply was uninterrupted)...
It has just occurred to me while posting this that if AI was able to recognize that there was an environment external to itself which contained entities that could cut off that power supply, it might well respond to that knowledge exactly as we have in response to the unknowable outcome of interruption of our own power supply (death). Veeerry interesting!

Harvey said...

Pliny:

If the three requirements for a self-learning system are met, as you have suggested, and if "we" can know and study these bias sources and INTERVENE to mitigate them, we will have created a precise analogy to how we humans have evolved and developed the Gods we have come to believe in. The only real difference will be that the Gods, in this latter case, will have created the AI rather than the AI having created the Gods.

pboyfloyd said...

Okay, oneblood, I'm not trying to trip Pliny up here and I'm not trying to confuse anyone with the word 'bias'.

Also, Harvey, I think that we're kind of 'jumping the gun' imagining an intelligent machine defending its life, if only by a split second.

I guess a better(or simpler) question to Pliny would be, "Are you guys trying to make an 'aware' machine?"

Then we could ask, "Are you trying to make it self-aware?"

Seems to me that we could get bogged down in details really very quickly here: "What do you mean by bias?" (ouch!)

Okay, oneblood, rough example: an artificial intelligence based on some kind of business model would 'have to' come up with some kind of business-type solution to any problem.

Pliny-the-in-Between said...

I highly recommend a short sci-fi story by Greg Egan called "Crystal Nights". It is required reading in our lab, along with several other works of fiction and philosophy pertaining to evolution, awareness, consequences and rights. If this work develops in a positive direction we will be providing an ongoing curriculum and study group on these subjects.

This story strikes to the heart of Harvey's comment and is germane to some of our efforts.

For the short term we are deploying systems which are in no way aware, nor will they ever be. They are problem-solving robots - quite good at it, but not aware in any reasonable sense.

Longer term is a bit different. We are working on some methods that could be construed as a primitive form of awareness in time. This more advanced system would include our first modules addressing the 3 factors I mentioned earlier.

On Mac's site I mentioned a short bit about the notion of machine morality. Self-learning systems need some kind of external benchmark against which to measure their performance. The metrics must also be some combination of positive and balanced negative assessment. All positive feedback leads to algorithms more and more inclined to higher risk for less real gain, and all negative feedback leads to system lockups where the system is unable to proceed in order to avoid negative consequences. A balance must be struck that assesses risk vs. benefit. The last part is the trickiest from a design standpoint, but even more so from a moral standpoint in my opinion - an external metric of performance that 'motivates' the system to improve. Any system that is tuned into the possibility and existence of external risks and rewards (and measuring the input from them) is by some definition self-aware. External metrics and system oversight imply reward and punishment.
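
A toy illustration of that risk/benefit balance (the score function, weights and numbers are all invented for this comment thread, not taken from any real system):

```python
# One external metric: reward benefit, penalize risk by a tunable weight.
def score(benefit, risk, penalty):
    return benefit - penalty * risk

candidates = [          # (benefit, risk) of possible actions
    (1.0, 0.1),         # modest gain, low risk
    (3.0, 2.5),         # big gain, high risk
    (0.0, 0.0),         # do nothing
]

for penalty in (0.0, 1.0, 100.0):
    best = max(candidates, key=lambda c: score(c[0], c[1], penalty))
    print(f"penalty {penalty}: chooses {best}")

# penalty 0.0   (all positive feedback)  -> the reckless high-risk action
# penalty 100.0 (all negative feedback)  -> 'do nothing', i.e. a lockup
# penalty 1.0   (balanced)               -> the modest, sensible action
```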

Natural systems are aware of external dangers and driven by responses to these factors as part of reproductive pressures.

Replicating such pressures in an AI lab opens up a whole can of ethical worms in the long-term if these efforts are successful.

Pliny-the-in-Between said...

Harvey,
I apologize in advance for my concrete response to your statement about human awareness analogs. My co-workers share your pain ;) but this stuff can get wildly out of hand very quickly.

The precise relationship of AI development to human cognitive evolution is probably at most metaphorical. We use our existing frames of reference to verbalize these things, but what goes on in these machines and how it will progress will be very different from humans, if for no other reason than that the machines are evolving in a very different ocean of ideas. How analogous it will be is anybody's guess, though I suspect not very, when it's all said and done.

That's not to say that we can't leverage some of what we know. For example, one area we study in the lab is the work of those who educate persons who are both blind and deaf. This is analogous in some ways to the machine's reality, since it can neither see nor hear input data in any way we would associate with that notion. How do you make a machine understand 'red' when it has no eyes?

This is a big area where we diverge from most in this field. We are less interested in replicating existing sensory realms (like sight, for example) and more interested in looking at ways of examining the universe in ways that biological systems (or at least humans) cannot. I suppose that reflects our bias in developing these things as tools or adjuncts to humans rather than competitors.

Pliny-the-in-Between said...

To prattle on a bit more, I absolutely agree with one of pboy's statements: to be self-learning, a thing needs to know that there is something beyond its current state. That is awareness by definition. Ergo a true self-learning system must also be aware to some extent.

pboyfloyd said...

I have no clue how far along you guys are on this, Pliny, of course, but I do think that there have been leaps made, just naturally, as computing power increases and barriers are broken - allowable memory size, object-oriented programming and such.

I know we can make a machine aware of what is around it with a camera and aware of sounds with a microphone and if we can make it aware of itself, the bloody thing will be alive!

The weird thing is that once 'you've' made one chip that thinks it's alive, millions will be made, they'll be children's toys before the next Christmas.

But I think that intelligence is a different thing, even though I've said that it's the exact same thing. Maybe intelligence is awareness squared?

Pliny-the-in-Between said...

I agree that intelligence is a different thing from simple awareness. Intelligence is more a measure of cognitive inference that sometimes is greater than the sum of its parts. It's the great leap forward.

But a lot can be done by the primitive versions. They already run circles around clinicians in the controlled trials. We are hoping to get a grant to study a full-scale model of the system in primary care. Early trials (very controlled) were promising, with improved clinical decision-making and incredible cost reductions on the order of 50%. We are also getting ready for a full system trial of a smartphone version of the system that is totally native. It's basically a primary care system on a cell phone.

The true 'awareness' issue is a less daunting task. I'm pretty confident we'll have parts 1 and 2 solved within the next 3-4 years. The current model certainly looks promising so far.

Part 3 is part of the true intelligence. Good cognitive frameworks and reliable inference will take longer, though we have one major component of that already in the pipeline.

Anonymous said...

"On the second one, I think that spirituality is a confused collection of ideas where we try to imagine that we are aware of something that's just not there. (any ideas on that oneblood?)"

Ok, after reading Pliny and Harvey, and then re-reading your post, I have a slightly better idea of what you mean.

First I think you kind of hit the nail on the head with 'awareness.' My mind automatically goes to Hegel and his definition of consciousness. Put roughly, consciousness is aware of an object and aware of itself. Both of these awarenesses are for the same consciousness.

But the catch is that Hegel ascribes a volition to consciousness. True consciousness doesn't just stop learning after a time and then only have solipsistic discourse. It negates itself. It will always, even in a minuscule capacity, be open to profound self-correction; and this is a constant dialectic till the day consciousness ceases.

Which is something akin to Pliny's three steps.

So, if I understand you correctly, pboy, I would agree about 'bias': I don't think it can be mitigated for man or machine except by absolutes. Since we don't have any that I know of (practically speaking), the idea of absolutes or the idea of a complete lack thereof must suffice in modeling awareness.

Intelligence and awareness do seem to have the more pedestrian aspects (our day-to-day social stuff that Pliny mentioned) and an ability to extrapolate based on pattern seeking and data, lots and lots and lots of data.

A machine consciousness will look more like our own than we'd like, and bias is unavoidable. But I defer to Pliny in this matter.

---

Concerning religion and machines: I agree in principle with your assessment. It is a hodgepodge of ideas. Yet its best idea is exactly what you 'condemn' it for: ludicrousness. If only machines were so programmed, it might indeed be complementary to their evolution and ours.

Technology is an actualization of our absurd notions, not just our immediately practical ones.

So maybe it should be phrased, "that we are aware of something that's just not there"...yet. And this projection forward in a speculative, almost ridiculous way is an inherent part of intelligence/consciousness/awareness.

An oblique answer, but I hope that it still gets at part of the heart of the question.

Pliny-the-in-Between said...

OneBlood, I am not yet convinced that machine cognition (whenever it occurs) will be all that similar to our own brand, for a number of reasons. Machine cognition is not strictly evolutionary; it is at least partly 'Intelligent Design'. Machines are not restricted per se to reusing existing structures for new or expanded purposes. Nor is the machine bathed in all the various other feedback loop mechanisms that affect human cognition (e.g., hormones). Machine cognition is aimed at very different purposes than our own. The presumed myriad steps in the evolution of human cognition will not be repeated in any lab. Machines will also likely operate in a very different part of the EM and sensory spectrum from our own.

In the end, perhaps they will look similar in some ways, particularly in areas where they overlap (problem solving, etc.), but it's most likely to be a similarity akin to that of birds and bats - independent and distinct but analogous methods of achieving active flight.

One thing might alter that a bit though. As a species we have become more and more risk averse. In order to protect ourselves we are more likely to commit our machines to dangerous environments. If we program self-preservation algorithms into these machines and allow them to run their course we might end up with machines just as neurotic and flighty as their creators...

Anonymous said...

Pliny, thanks so much! I see your point.

Now I'm left with some simplistic questions (apologies). How do you, and your co-workers in A.I., keep yourselves from your machines?

In other words, what's the reason-based protocol that keeps distance between the creator and the created? Is there a programmer's ethics to follow?

-By the by, you do a damn fine job of articulating your points.-

Harvey said...

Pliny:

In other words, it is not unlikely that we will end up "creating them (AI) in our own image." Once again, I have to feel that no matter how hard we try, our human "biases" will inevitably creep in. The only real difference between potential AI and humanity that I can see may be the lack of a need to self-preserve (or self-perpetuate). In the end, our risk averseness has developed in response to our "prime directive": we must survive long enough to reproduce and nurture our offspring until they can do so in turn. If machine AI is spared this need (deus ex machina?), what risks must it avoid?

Pliny-the-in-Between said...

OneBlood, Harvey,

In other words, what's the reason-based protocol that keeps distance between the creator and the created? Is there a programmer's ethics to follow?

__________
We have one - primum non nocere. That is on a banner at the door. All of our people are required to take the Hippocratic or World Health Organization oaths, even programmers and developers. It may sound stupid, but it's a reminder to all who work on this project that in the end, it is intended to help people. That is a big responsibility. That is also the reason that we don't pursue any projects that would result in the use of our proprietary systems in any military applications, for example. The same little program kernel that fits on a cell phone to help a person manage their medical needs would fit on a smart bomb. Not interested.

I mentioned before that our team is required to read a number of documents as part of their duties. Some of these include discussions on things like the Manhattan Project which illustrate the role and responsibilities of scientists. I don't mean to sound melodramatic but these things could end up being pretty impressive.

There are a lot of other methods we use. Any significant data or system upgrade requires a series of timed and independent steps that one person cannot do alone.

Finally, these things are all designed using what, for all intents and purposes, is a cognitive bias checklist. Known human cognitive biases must be mitigated, and any system alteration must be retested against all existing bias mitigators and proven not to alter them, with rigorous testing.
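
As a hypothetical sketch of what that retesting could look like (the checks and the results dictionary are invented for illustration, not Pliny's actual checklist):

```python
# Every system alteration must re-pass every existing bias-mitigation
# check before it ships; one failure and the upgrade is rejected.

BIAS_CHECKLIST = {
    "anchoring":    lambda r: r["first_input_weight"] < 0.5,
    "availability": lambda r: r["rare_cases_considered"],
    "confirmation": lambda r: r["contradictions_examined"],
}

def approve_upgrade(test_results):
    failures = [name for name, check in BIAS_CHECKLIST.items()
                if not check(test_results)]
    if failures:
        raise ValueError(f"Upgrade rejected; mitigators broken: {failures}")
    return True

approve_upgrade({"first_input_weight": 0.2,
                 "rare_cases_considered": True,
                 "contradictions_examined": True})
```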

Our QA team has the ultimate authority over the system. If they don't like something, it gets the axe, even if it's a pet project of mine.

We are also planning an open website that will publicly report system performance and behaviors - both good and bad. Our feeling is that having your dirty laundry exposed is a great incentive to do it right in the first place.

pboyfloyd said...

"I agree that intelligence is a different thing from simple awareness. Intelligence is more a measure of cognitive inference that sometimes is greater than the sum of its parts."

Okay, Pliny, I think we're talking past each other here. I'm equating intelligence with awareness, basically denying that 'an intelligence' (be it biological or machine) could 'be' without awareness.

Take a common P.C. loaded with some kind of Windows (GUI): it is clearly aware of the mouse and the keyboard (input), and we are aware of the machinations within via the monitor.

I'm saying that a machine with some kind of intelligence needs many such devices, not necessarily hardware, but necessarily adding to its awareness.

I think that your example of a blind person not understanding 'red' becomes a good point only after we have listed the input 'devices' and 'output' devices of the blind person, who presumably communicates 'somehow' and NEEDS to be AWARE of those methods.

Making the whole thing a black box, making the entire process INTERNAL, doesn't 'do away with the awareness factor', it just makes it hidden; out of sight, out of mind, sort of thing.

To go back to basics: an awareness of a problem is key. A method of solving the problem is key (which involves awareness of many other things). A way to express the solution is key (which seems to involve awareness of 'others').
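
pboy's point here reads almost like an architecture: an agent's 'awareness' is exactly the set of input channels bolted onto it, no more. A toy rendering (all names invented):

```python
# The agent can only ever be 'aware' of what its attached devices expose.
class Agent:
    def __init__(self):
        self.channels = {}                 # named awareness channels

    def attach(self, name, read_fn):
        # each device literally adds to the agent's field of awareness
        self.channels[name] = read_fn

    def perceive(self):
        # everything outside these channels is invisible to the agent
        return {name: read() for name, read in self.channels.items()}

agent = Agent()
agent.attach("camera", lambda: "light levels")
agent.attach("microphone", lambda: "ambient sound")
print(agent.perceive())
```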

Harvey said...

All of this causes me to hark back to some of Isaac Asimov's ROBOT series. As long as AI does not become mobile and self-replicating, most of Asimov's concerns should drop out of the equation. Nevertheless, even if AI is housed statically and/or has a "prime directive" that will literally cause it to suicide rather than do any harm to living human beings, the potential unwanted and unexpected outcomes have been examined and warned about repeatedly. For example, a "phone"-sized chip with peaceful applications can easily be subverted to nefarious or military uses by other humans (if not by AI itself). This is potentially really scary stuff!!!

mac said...

So what happens if the AI becomes aware of its creator and bestows god-like status on said creator?

If awareness is met with compassion, could this happen? Could AI actually care about us mere mortals?

Or am I so far off on my thinking that I appear here?

mac said...

place stupid in that last post (in the blank) "that I appear _____ here?"


And, I believe I know my answer already ;-)

Pliny-the-in-Between said...

Probably my absolute favorite scifi bit about AI was in the movie 'Dark Star'. One of the sentient H-bombs malfunctions and wants to detonate itself. It is talked out of it by a crew member who gets into an existentialist argument with the bomb.

pboyfloyd said...

Any comment is a good comment, Mac.

Anonymous said...

Another tangent.

All this reminds me that I've left 'Gödel, Escher, Bach' practically untouched, in favor of more pragmatic inquiries.

So Pliny, without giving much away, how does Hofstadter make 'strange loops' into a hypothesis about programming?

If you don't remember, no worries, I'll get to it eventually.

pboyfloyd said...

Guess I 'spoke too soon' mac.

(shrug)

Anonymous said...

Ouch peeb. Ouch.

pboyfloyd said...

Gotcha one last time for 2009 oneblood.

Still, it would be nice to know what you're talking about.

Anonymous said...

Hmmm, unless you 'Google' it. I shan't tell you till 2010 you rascal.

Happy New Year. :-)

pboyfloyd said...

Yes, indeed I did, oneblood.

Here's where A.I., especially if it is to be concerned with health issues, is gonna run into the ethical paradoxes sooner or later.

If it is better to save four or five, maybe six lives at the expense of one, what's to stop an A.I. recommending using a healthy specimen for parts?

This is assuming that we program the machine to be 'alive' but refuse to allow 'bias' to enter into its equations.

IOW, for the sake of 'zero tolerance of bias', we might get a 'monkey's paw' scenario where we trust the machine to come up with the most unbiased response, with tragic results.

Pliny-the-in-Between said...

What's to keep AI systems from parting out specimens to serve others?

The same thing that keeps it from happening now: moral people using the precept of primum non nocere. AI systems won't be in a position to part out people. They will be partnered with humans, each providing something the other lacks, resulting in a whole potentially greater than the sum of its parts.

mac said...

"They will be partnered with humans each providing something the other lacks resulting in a sum potentially greater than the whole"


If we teach AI to reason, what happens when AI finds us unreasonable?

Joshua said...

Sorry, hijacking comment thread because I don't know how else to get in touch with you. Do you by any chance know what happened to Colloquy? Stacy's blog has apparently been deleted and I don't know why.

Anonymous said...

Hi Joshua,

Third-hand info I got was that there were time issues. If you do get in touch with her, let her know she's missed!

Stacy S. said...

I miss you guys too! I'm still lurking around - just put too much weight on last year and need to get rid of it! :-)

Anonymous said...

Stacy! What up? It's nice to see that you're still "lurking."

My belly is starting to look like one of those troll dolls with the funky hair. I need to lose weight too.

Anonymous said...

PEEB! PEEB!

No, I don't really know you, know you, from a hole in the ground. Nevertheless, you're missed.

What, are you celebrating the fact that the U.S. is letting haggis back through its borders and Canada might soon follow?

I hope it's nothing bad at least... well, your sitch, not the haggis; that stuff is awful. No offense.

Saint Brian the Godless said...

Pboy, I think they've shut down the Dinesh blog finally. All I get is an AOL opinion news page.

mac said...

Wow, I can't even get my blog to stay up on my computer. I keep getting sent to a media player site???

mac said...

Looking here, I think you may have a different problem.
You don't have any of those useless widgets I had.

Have you disabled your cookies or anything like that?