The Artificial Intelligence Thread

Discussion in 'Off-Topic' started by narad, Nov 2, 2017.

  1. narad

    narad SS.org Regular

    Messages:
    4,924
    Likes Received:
    1,428
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    After hijacking another thread quite a bit, I thought it would be good to start something fresh and dedicated to AI and continue down that path here. I mean, it's unlikely that AI is going to become less ubiquitous/less discussed here...

    This is probably a decent place to start:

    "Stephen Hawking says he fears artificial intelligence will replace humans"

     
  2. vansinn

    vansinn ShredNeck into Beck

    Messages:
    2,911
    Likes Received:
    157
    Joined:
    Sep 22, 2007
    Location:
    Cosmos
    Interesting that you should open this topic just as I'm watching an episode on the very topic on Red Ice Radio.

    No doubt AI is here to stay and will keep advancing, and further, will change our civilization.
    For me, the big question is how it will evolve, and whether or not it will have built-in rules like Asimov's laws of robotics.

    The problem with such rules, as I see it, is that they're based on linear logic, while AI is self-adapting and self-learning, based on things like neural networks.

    As such, how to implement protective rule sets?
    And how to control AI?
    Or, should it be controlled at all, or be left to evolve?
    I mean, the basics of AI are coded by humans, and we aren't exactly doing too good a job at living together in peace and harmony as it is...

    The referenced program spends a good amount of time on NEOM, the new advanced city being built by Saudi Arabia, along with a discussion about the Saudis being the first to grant citizenship to a female AI robot.

    I find it scary to see a country that is only just now granting driving rights to women wanting to be the leader in AI.
    And further, that the Saudis will not accept discussion of their preferred view on which religion shall be implemented in NEOM.

    Putin perhaps said it just right: the country in control of AI will rule the world.
    And I might agree with Elon Musk, who donated, IIRC, $10 million in funding to make sure AI is kept fully in the open, uncontrolled by any single entity.

    Anyway, we're already seeing AI-based products popping up, like imaging software capable of dressing up too-nude pictures, seemingly perfectly converting a black-and-white image to color, or rebuilding hidden features.

    Soon, we'll have software in guitar products rendering traditional presets a relic of the past, instead adding textures that fit the music as it's being played.
     
  3. bostjan

    bostjan MicroMetal Contributor

    Messages:
    13,174
    Likes Received:
    1,273
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Ok, so, I have a huge respect for Prof. Hawking. He's pretty much the man. That said, the media loves paraphrasing him, and the man himself is not perfect (no one is). His Ph.D. dissertation has recently been made widely available, and it has some pretty surprising mistakes in it, specifically in the predictions he made about where cosmology was heading, which were based on assumptions he thought were safe at the time but were eventually disproven. On the other hand, his third chapter was amazing. But that's not the point...

    I mean, this statement, without context, is pretty bland. Powerful weapons have been a reality for nearly a hundred years now. New ways for the few to oppress the many have been the story of planet Earth for hundreds of millions of years. And, honestly, I trust a futuristic AI programmed by good guys way more than I trust a bad guy himself, but still that's not very much.

    If somebody like Kim Jong Un has a nuclear arsenal and a stockpile of ranged missiles, and, say someplace like the USA is developing an AI to determine how to launch nuclear missiles - well, we're fucked. Honestly, if one doesn't kill us, the other might.

    I can guarantee that the military is watching the development of AI self-driving cars very closely. They'd be idiots not to. What if a self-driving armoured Humvee was set to lead a convoy? It hits an IED and takes serious damage. You can fix it. With a driver who loses an arm or a leg or worse, there's no repairing him. If it's AI, then there's no terrible home visit to explain to the AI's widow and little kids that the AI won't be coming home on time. So, AI military vehicles are a given.

    If the AI in your automobile can identify little kids playing near the street as a hazard, and identify a police officer as an authority figure for whom to pull over and stop, then the AI can be put on a battlefield with a gun and trained to shoot bad guys and not good guys, right? Eventually...

    Then comes the "Terminator" scenario. AI taught "this is a bad guy, this is a good guy," ponders some deep philosophy and concludes that all humans are bad guys...therefore extinction is in order.

    Well, that's where things get necessarily fuzzy for me. An AI is not really that different from a person at that point. Maybe smarter, maybe less fearful, but still limited in what it can do, as a function of design and as a function of availability of the necessary resources. I doubt the big red doomsday button is going to be converted to AI. Maybe I'm flat out wrong, but if we can't even talk about driverless cars without people flipping their wigs, then I don't see how the AI all-out nuke button is going to get traction with whomever is responsible for installing new nuke buttons.
     
  4. KnightBrolaire

    KnightBrolaire 8 string hoarder

    Messages:
    4,129
    Likes Received:
    1,318
    Joined:
    Mar 19, 2015
    Location:
    Minneapolis, MN
    Given the speed at which the military actually implements good ideas, we won't have autonomous MRAPs or Humvees for 30 years lol. Seriously, they've been talking about modifying the M4 platform or going to a larger caliber round for well over 20 years now. As far as trying to develop autonomous tanks or such, I don't think that's going to happen any time soon. The military is far more interested in a cheaper, easier option, which is just having the vehicles be remote controlled. There's no need for rampant AI with access to high explosive rounds/nukes then.
     
  5. bostjan

    bostjan MicroMetal Contributor

    Messages:
    13,174
    Likes Received:
    1,273
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Probably not going to happen with tanks. And I really do think that the timeline for this to happen is a lot slower than what the media keeps implying (which is that it'll basically be in a couple of months, and that in 2-3 years, it'll be super widespread).

    For one, any self-driving cars that rely upon maps and GPS are not going to be worth a damn where I live. There are no accurate maps of the roads, and GPS signals are spotty here compared to other places in the USA. Many of the roads here are impassable in the winter, and even more impassable in the spring. Farmer Bob down at the middle-of-nowhere farm doesn't take kindly to strangers, and that includes cars and trucks that drive onto his property, and Google Maps shows a public road on Farmer Bob's property that isn't there.

    Then there is where I used to live - Detroit. It's no secret that there are some crooked cops on duty in Detroit. It's also no secret that there are some neighbourhoods in Detroit in which the radio in your car can get stolen while you're still driving. I imagine a self-driving car in Detroit - trying to avoid potholes deep enough that the locals catch fish in them (no, I'm serious), not knowing how to respond to thieves and vandals, and I have no idea what happens when a crooked cop tries to pull over a self-driving car to write a speeding ticket. In fact, what happens when a carjacker breaks into a fully autonomous self-driving car? :lol: What if there's no steering wheel?!

    In the 1980's, people thought that by 2010 there would be flying cars. In the 1970's, people thought that by 2000 we would be living in outer space. In the 1990's, people thought Y2K was going to kill everyone, or that Ebola was going to kill everyone, or that the government was going to kill everyone... We make strides toward these things - drones instead of flying cars, the space station, where people live in outer space, kind of - and well, Ebola is still an issue, but for the moment the return of the bubonic plague looks like more of an epidemiological concern than Ebola.

    So in 2017, we are thinking that AI will take over and enslave mankind. The idea is not new, but it's gaining a lot of ground as far as being taken seriously. As long as people are taking it more and more seriously, it will probably not happen. If you read what Prof. Hawking said carefully, you'll see that he's not predicting that machines will enslave mankind, he's predicting that, if mankind doesn't control itself, machines will enslave mankind. It's a qualification, but it makes a difference.

    The thought is that computer viruses were made by computer programmers sort of by accident, so killer AI will be made by accident, or just be made in general. I think that there's quite a bit of truth in that concern. A computer virus is really not that sophisticated a program. AI is much more sophisticated, but if someone got ahold of a sophisticated AI, it is quite possible it could be altered in a not-so-sophisticated way to become something both intelligent and dangerous.

    So when would the machines take over? 2020? 2030? 2040? 2050? ... I don't think it's likely within our lifetimes. It might be likely for the following generation, though...

    I think we will see huge strides in AI in the 2020's, but then something else for a while... then maybe another surge around 2050 or so. Things like this seem to come in 30 or 40 year cycles of interest. Probably the ~2050 AI surge will break new ground no one ever dreamed of. Then, the next surge, it'll get out of control - by then, 2080 or 2090 or maybe later. Heck, by 2080 it'll be a miracle if society is still intact, between the threat of terrorists getting nuclear weapons, the threat of global war as the lessons of the last global war are forgotten, and diseases spreading as poverty takes a grip on the common people and people refuse vaccines based on junk science... IDK, those things are a little closer to home.

    I remember back circa 2000, with the advent of nanomachines, there were tests of some amazing nanotechnologies. Nanomachines that acted as immune-system boosts to kill off infectious material in patients with compromised immune systems. Nanomachines that could perform mechanical tasks on a tiny, tiny scale. The idea then was that a nanomachine could be built to replicate itself - making more nanomachines that could replicate themselves... If such a thing existed and there were no way to turn it off, it could potentially disassemble everything on Earth in order to manufacture more copies of itself.

    Such a machine was never built, but you might be surprised at how not-too-far-off-from-reality this got... A resurgence in nanotechnology could cause such a catastrophe in the future, but more to our point: if we developed smart nanomachines - that is, nanomachines smaller than what the human eye could ever see, but with processing power and the ability to utilize AI - I think the result could be very frightening if it went wrong.
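    The runaway-replication scenario is just exponential doubling, which is easy to put rough numbers on. A back-of-the-envelope Python sketch (the doubling time and the ~1e41 atom count are hypothetical, order-of-magnitude assumptions, purely for illustration):

```python
import math

# Back-of-the-envelope "grey goo" arithmetic: a machine that copies itself
# once per generation, starting from a single replicator. All numbers here
# are hypothetical order-of-magnitude assumptions, not measurements.

def generations_to_reach(target_count, start=1):
    """Doublings needed before `start` replicators exceed `target_count`."""
    return math.ceil(math.log2(target_count / start))

# Suppose consuming Earth's biomass means assembling on the order of 1e41
# atoms' worth of copies (a very rough guess). Unchecked doubling gets
# there in only ~137 generations:
print(generations_to_reach(1e41))  # 137
```

    With a hypothetical doubling time of one hour, 137 generations is less than six days - which is why the "no way to turn it off" case is the scary one.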

    Part of what makes me feel that way is that nanotechnology is already very powerful. It could potentially cure HIV/AIDS. It could potentially cure the common cold. In fact, nanotechnology shows more promise at potentially solving the world's epidemics than anything else. AI is going to be a big thing for industry, as it produces more reliable results and saves money. AI+nanotechnology is super attractive to researchers who want to build a universal antivirus. Imagine a nanomachine with sensors and nano AI capable of discovering a thing, identifying it as a threat or not, and then neutralizing it. It could learn what bad things look like and what good things look like and decide to stop the spread of new, unclassified diseases. In 2017, disease is just as scary as ever, at a time when most other threats to humankind have been mitigated. But what if it goes wrong, and we instead end up with nanomachines that determine human cells are "bad." There are a lot of different human cells, too, so it could lead to potentially a new fatal disease that would be untreatable...
     
  6. KnightBrolaire

    KnightBrolaire 8 string hoarder

    Messages:
    4,129
    Likes Received:
    1,318
    Joined:
    Mar 19, 2015
    Location:
    Minneapolis, MN
    I don't know that nanomachines could necessarily cure AIDS, but they have the potential to be serious tools both for healthcare and for warfare/espionage. I think if we could utilize nanomachines as rudimentary macrophages/T cells (which would be a start), then we definitely have the potential to neutralize pathogens in the future. The problem with AIDS is that the virus replicates inside the cell and mutates very quickly due to errors in the encoding process, IIRC. If the nanomachines targeted the viral capsules before they could begin hijacking cells, that would probably be the most effective method. As far as usage in warfare, imagine being able to release tetrodotoxin, cyanide, or other poisons/paralytics directly into the bloodstream via nanomachines (i.e., say you throw an innocuous smoke grenade, but it's actually aerosolized nanomachines). It would make clearing houses a hell of a lot easier if everyone is paralyzed/dead already. Or say the CIA has someone spray a nanomachine mist onto someone's face (or the subject touches something coated in nanomachine slime) which enters their bloodstream and allows us to have human bugs/surveillance. I think there's always the very real possibility that something could go wrong with AI/nanomachines, but the upsides are worth it.
     
  7. iamaom

    iamaom SS.org Regular

    Messages:
    106
    Likes Received:
    87
    Joined:
    Sep 3, 2016
    Location:
    Texas
    Replication inside cells is perfectly normal for viruses. What makes HIV/AIDS such a pain is: 1. they are retroviruses, so they copy their genetic code in the opposite direction from 99% of life, which makes common anti-viral techniques moot; 2. the virus can multiply for years without much issue, so it's hard to tell when exactly you got infected and from whom, meaning someone who doesn't know they're infected can spread the disease much more easily than with a virus that immediately alerts the body; and 3. they take over the host's immune cells in such a way that the body does not know the cells are infected, which lets them spread the infection covertly and without telltale sickness signs like fever.
     
  8. Ebony

    Ebony Mr Sunshine

    Messages:
    234
    Likes Received:
    56
    Joined:
    Jun 7, 2016
    Location:
    Fetsund, Norway
    I think Stephen Hawking is onto something. The human intellect is fallible enough as it is; we don't need a world run by poor imitations of it. And the idea that we will "eventually" be able to produce an incorruptible, perfect machine doing our bidding is certifiable.

    Someone may be able to somewhat successfully create and wield the fantasy-version of AI, but they'll live hundreds of thousands of years from now if not millions and look like Mega Mewtwo Y. And even then I think the technology is going to create just as much hell as some of us imagine it will today.

    And this is all assuming some natural/man-made disaster doesn't throw us back into the stone age and postpone the possibility of this ever happening, which is very likely.
     
  9. narad

    narad SS.org Regular

    Messages:
    4,924
    Likes Received:
    1,428
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    Why would an AI be a poor imitation of human intelligence? Is AlphaGo a poor imitation of a Go champion?
     
  10. Ebony

    Ebony Mr Sunshine

    Messages:
    234
    Likes Received:
    56
    Joined:
    Jun 7, 2016
    Location:
    Fetsund, Norway
    I'm speaking about putting AI in charge of tasks and functions that, at least for now, require true cognitive function.
    I'm sure we can debate what "true cognitive function" is, but given how we still depend on it for most things, I don't think we can deny its existence. Maybe we'll crack that code one day, but how can we possibly hope to replicate it in machines before we do?
     
  11. narad

    narad SS.org Regular

    Messages:
    4,924
    Likes Received:
    1,428
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    This "replicate it into machines" framing seems misleading. AlphaGo is a specific task, but so is ...technical assistance for MS Word 2001... driving a person from point A to point B ...finding precedent court cases... etc. We as humans are able to do all of these to some degree, and we multitask thousands of hierarchical things, but I don't see much difference between them, except that planning is more extensive and rewards are extremely delayed for most everyday things.
     
  12. bostjan

    bostjan MicroMetal Contributor

    Messages:
    13,174
    Likes Received:
    1,273
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    The nanomachines proposed for fighting HIV are small enough to enter a cell without much trouble. They are much smaller than a macrophage. I suppose they failed in clinical tests, but there are nanomachines that can detect HIV directly from DNA.
     
  13. narad

    narad SS.org Regular

    Messages:
    4,924
    Likes Received:
    1,428
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK


    Thoughts?
     
    StevenC likes this.
  14. narad

    narad SS.org Regular

    Messages:
    4,924
    Likes Received:
    1,428
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    Cute video of a serious concern:

     
  15. bostjan

    bostjan MicroMetal Contributor

    Messages:
    13,174
    Likes Received:
    1,273
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Interesting that it came up with new opening move sequences...but...if it wasn't programmed with human patterns, why did it already know those and then discard them? Just a nitpick.

    Also, totally unrelated to the intellectual content, but why in the name of Cthulhu was there a finger or blurry shoulder or whatever obscuring the video half the time... :noplease: If I were editing, I'd have to do something about that and either reshoot the clips or create an AI program to edit it out. :lol:

    The killer drone video reminds me too much of Robocop 2. :rofl: For a minute I thought it was real.
     
  16. narad

    narad SS.org Regular

    Messages:
    4,924
    Likes Received:
    1,428
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    The human patterns were more like milestones of its learning -- so it picked up some human strategies which were effective at one point during self-play, but eventually were discarded as other more novel strategies were discovered to counter them. Because this is self-play, early on it's novice vs. novice, and many simple strategies may yield big payouts, but as it advances these become ineffective.

    Yea, there is a very similar Black Mirror episode too.
     
  17. bostjan

    bostjan MicroMetal Contributor

    Messages:
    13,174
    Likes Received:
    1,273
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Ok, yeah, that's clear now. Even more interesting. Maybe soon there will be something similar for chess, which might have a result more up my alley. Not that I've never played Go, but it's been years and I was never at all serious about it. When I was a kid, I played chess in a tournament once. It was enough to go from feeling really confident playing my friends and teachers in school to feeling scared and intimidated by real players, so I didn't pursue it further at the time, then stupidly repeated the entire experience as a young adult to the same effect... But anyway, I'm quite a bit more personally invested in chess, and I think it will come.
     
  18. narad

    narad SS.org Regular

    Messages:
    4,924
    Likes Received:
    1,428
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    But Chess is not a great task for AI, IMO, since human Chess strategy involves more lookahead, which is closer to brute force and something machines are inherently better than humans at. That's precisely why we've had super-human Chess bots for decades. I remember reading Kasparov (IIRC) mentioning that expert Chess players can look ahead 15-20 moves at times. While this is impressive, and it's amazing to train your brain to organize all that information, it's not intelligence. For this reason it's kind of funny that Western culture associates Chess with intelligence.

    In contrast, Go involves some lookahead, but having a sense of which board states are good -- that inexplicable "gist" -- is more the essence of Go. It's interesting to see expert Go players providing commentary mid-game, describing particular regions of the board as + or -, but not having a clear assessment of who's in the lead.
     
  19. bostjan

    bostjan MicroMetal Contributor

    Messages:
    13,174
    Likes Received:
    1,273
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Intelligence noun - the ability to acquire and apply knowledge and skills.

    Are you implying that chess doesn't require knowledge or doesn't require skill?
     
  20. narad

    narad SS.org Regular

    Messages:
    4,924
    Likes Received:
    1,428
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    Well let's forget about acquiring, since we're talking about intelligent behavior -- what does a good Chess strategy look like?

    I'm in a state, and I want to know what move I should make. For some set of m moves I think might be good, I look n steps into the future, each time assuming the opponent plays their optimal move. For each of these, I wind up in some new state. For all of these states, I go through and do a weighted count of my pieces vs. their pieces, where high-priority pieces contribute a higher weight to the score. Whichever state is the highest scoring, I take the action that leads me to that state.

    Did this need much intelligence? Not really - it needed a lot of mechanical power. I'd have to explore m^n imagined states into the future, so I can't explore them all and have to use my brain to prune away bad choices when they start to feel suboptimal, but I really don't need much of a brain to do it - just search. Expert Chess players are great searchers, but machines are incomparably better, and so super-human Chess came quite early.
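    The procedure described above is basically minimax search with a weighted material evaluation. A minimal Python sketch (the game representation and piece weights are made up for illustration - this is not a real Chess engine):

```python
# Minimax with a weighted material count, as described above.
# The game itself is abstract: `moves_fn` lists legal moves from a state,
# `apply_fn` returns the state after a move. Piece weights are the usual
# textbook values, used here purely as an illustration.

PIECE_WEIGHTS = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def evaluate(state):
    """Weighted count of my pieces minus the opponent's."""
    score = 0
    for piece, owner in state["pieces"]:
        weight = PIECE_WEIGHTS[piece]
        score += weight if owner == "me" else -weight
    return score

def minimax(state, depth, maximizing, moves_fn, apply_fn):
    """Look `depth` plies ahead, assuming the opponent plays optimally."""
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_value = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        value, _ = minimax(apply_fn(state, move), depth - 1,
                           not maximizing, moves_fn, apply_fn)
        if (maximizing and value > best_value) or \
           (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move
```

    Real engines add alpha-beta pruning (the "prune away bad choices" step) so most of the m^n tree is never visited, but the skeleton is the same.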

    Let's apply this to Go. First, I have exponentially more moves I can make. This greatly limits n, so I can't search that far ahead. And when I do search far ahead I wind up in these new states I need to assess but ...WTF, these are nonsensical. Whether these configurations are good or bad will only become clear toward the end of the game, so now I don't know which state is good, or which action to take. I can't brute force it, I need to have something akin to intuition, regarding whether my state is good or bad. This is what AlphaGo has learned, and why it appeared on the cover of Nature, whereas the self-play Chess system will not.
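    To put rough numbers on that blow-up: average branching factors of about 35 for Chess and 250 for 19x19 Go are commonly cited estimates, and plugging them into m^n shows why the same lookahead depth is vastly more expensive in Go:

```python
# Rough size of the search tree when looking n plies ahead with an
# average branching factor m. The branching factors below are commonly
# cited estimates, not exact counts.

CHESS_BRANCHING = 35   # ~legal moves per Chess position
GO_BRANCHING = 250     # ~legal moves per 19x19 Go position

def tree_size(m, n):
    """Number of leaf states reached after n plies."""
    return m ** n

chess_6 = tree_size(CHESS_BRANCHING, 6)  # ~1.8 billion states
go_6 = tree_size(GO_BRANCHING, 6)        # ~244 trillion states
print(f"Chess, 6 plies: {chess_6:.2e}")
print(f"Go,    6 plies: {go_6:.2e}")
print(f"Go tree is ~{go_6 // chess_6:,}x larger at the same depth")
```

    So limiting n is forced in Go, and the states you do reach still need evaluating - which in Go is exactly the learned "intuition" part, whereas a simple material count works tolerably for Chess.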
     

Share This Page