Anyone on here particularly religious?

Discussion in 'Politics & Current Events' started by Hollowway, Jul 24, 2017.

  1. marcwormjim

    marcwormjim SS.org Regular

    Messages:
    1,679
    Likes Received:
    822
    Joined:
    Jan 15, 2015
    Location:
    Not here
    Off-topic, but would you describe yourself as being inclined in any of the subjects you capitalized? I have no follow-up questions.
     
  2. CrazyDean

    CrazyDean SS.org Regular

    Messages:
    953
    Likes Received:
    184
    Joined:
    Dec 7, 2010
    Location:
    Columbia, SC
    I'm actually impressed with how well you stated this.
     
  3. bostjan

    bostjan MicroMetal Contributor

    Messages:
    13,403
    Likes Received:
    1,458
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Advances in AI produced by humans as evidence contrary to the intelligence of humans? I disagree. I think advances in AI are examples of how intelligent humans can be.

    Intuition based on a first guess might not be very accurate when it comes to new information. Intuition built up through a series of clever problem-solving exercises, though, usually is, which is exactly how relativity came about. Quantum theory was essentially developed out of several overlapping creative solutions to somewhat related problems. Still, both theories, and how well they have predicted things at physical scales totally out of touch with what humans experience in everyday life, stand to me as a testament to how clever human beings can be.

    The fact that humans can sit down and think rationally about a problem long enough to come up with such surprising solutions, and then gather those surprising solutions to complex problems into encompassing theories, only reinforces that.
     
  4. Stuck_in_a_dream

    Stuck_in_a_dream SS.org Regular

    Messages:
    1,027
    Likes Received:
    121
    Joined:
    Apr 6, 2011
    Location:
    Austin, TX
    Well, AI is how I make my living during the day, so I know a few things about it. Humans (in their own form of intelligence) probably have no match within a 50-light-year radius; I'll give you that. But we are on the cusp of producing AI implementations that will surpass our own capabilities in both scale and sophistication. Yes, we can brag about it then to the chimps, but who cares :) - no one is watching or keeping score.

    Recent advances in AI: well, don't take my word for it, but some of what you hear about in the news is very real.
    Examples:
    1. AI algorithms can be creative & imaginative, e.g. DeepDream, or the Facebook bots which came up with their own language to optimize communications between themselves.

    2. Vision: a simple task for a human (the brainy part of it, at least), but we got beaten by ResNet in 2015 (see the sketch after this list).

    3. Games, anyone? The Stockfish app on your phone will defeat the chess world champion like it's nothing; Stockfish is rated something like 500 points higher than Magnus Carlsen, who is a mere 40-50 points ahead of the world number two. Yes, it's a different type of intelligence, but when applied correctly by future AI it will enable machines to surpass human capabilities by light years. Sometimes the sheer scale of a simple application can outperform the wittiest of apes :)

    For a more human-like intelligence: one of the devs who joined Google's DeepMind a few weeks before AlphaGo's first victory (against the European champion) did his PhD research developing a chess engine (Giraffe) that learns how to play chess by analyzing (i.e. watching) previous games. It reached IM (international master) level in 72 hours! You can probably download the code, run it on your computer, and test it yourself.

    4. Games (cont'd): forget simple Atari games, where AI agents learned to play from the raw screen and then beat us at them. OpenAI just developed a bot that beats humans at DOTA 2: https://blog.openai.com/dota-2/

    5. Google/Tesla: driverless cars. That was thought to be way off in the future; not any more.

    6. Language understanding, reasoning & inference: still behind human level, but making very fast strides. Reading comprehension and question-answering is a heavily active area of research, and deep neural network performance is actually approaching humans' on at least one of the tasks; see https://rajpurkar.github.io/SQuAD-explorer/
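
    (For the vision point in 2 above, here's a rough sketch of the kind of off-the-shelf model I mean. The torchvision ResNet and the file name are my own assumptions for illustration, not what any particular benchmark used.)

    Code:
    # Rough sketch: classify a photo with a pretrained ResNet (assumes torch/torchvision installed).
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet50(pretrained=True)   # convolutional net trained on ImageNet
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("some_photo.jpg")         # hypothetical input image
    batch = preprocess(img).unsqueeze(0)       # add a batch dimension
    with torch.no_grad():
        logits = model(batch)
    print(logits.argmax(dim=1).item())         # index of the predicted ImageNet class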

    So, I'll give it another 5-10 years before we can see super-human level bots at our service, or maybe not ;)
     
  5. bostjan

    bostjan MicroMetal Contributor

    Messages:
    13,403
    Likes Received:
    1,458
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Well, AI can be very effective at beating a human at a game when the rules are simple. As the rules get more and more complex, humans have a higher and higher rate of success against AI.

    But there is not really a strict definition of overall intelligence. AI can incorporate pieces of intelligence contributed by all sorts of different people, so a chess program that has been taught every trick in the "how to play chess" book can hold its own against a high-level professional chess player. As we approach the complexity of real life, though, there are many more rules, and problems are better solved by intelligence that works at higher and higher levels. I'm simply not convinced that AI is as good at this sort of thing as people on this board continually tout. I've seen AI programs that are supposedly coming up with their own speech patterns, conversational concepts, and even their own languages, and while I am thoroughly impressed, it's nothing compared to what a human can do, presently.
     
  6. Drew

    Drew Forum MVP

    Messages:
    26,534
    Likes Received:
    1,972
    Joined:
    Aug 17, 2004
    Location:
    Somerville, MA
    Not to involve myself in this one, but two comments:

    1) Computers have consistently outplayed chess grandmasters for some time now. We lost that fight.
    2) Since I share relevant XKCD comics whenever the opportunity presents itself:

    https://xkcd.com/1875/
     
  7. narad

    narad SS.org Regular

    Messages:
    5,022
    Likes Received:
    1,592
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    That's not how modern AI works, though. There's no hand-coding of strategies in something like AlphaGo, and the more recent versions work not by watching human experts but rely more, if not entirely, on self-play. The strength of the model lies in its superhuman ability to score states of play, which is itself quite an abstract task. Old chess programs could analyze the state of the board by counting pieces and other quick heuristics, and then did a lot of brute-force search, well beyond what the human mind can explore, giving rise to intelligent behavior driven by very un-human-like means. In contrast, it can be quite difficult to look at a pro Go board early or mid-game and understand who has the advantage, and it's impossible to look very far ahead in Go's state space.
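
    (To make "counting pieces plus brute force" concrete, here's a toy sketch of the old-school approach - a material-count evaluation and a shallow exhaustive search. It assumes the python-chess library and is nowhere near a real engine, just an illustration.)

    Code:
    # Toy "classic" chess engine: quick material heuristic + brute-force minimax.
    import chess  # pip install chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def evaluate(board):
        """Quick heuristic: material balance from White's point of view."""
        score = 0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            score += value if piece.color == chess.WHITE else -value
        return score

    def minimax(board, depth):
        """Brute-force search: try every legal move, recurse, keep the best score."""
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        best = -999 if board.turn == chess.WHITE else 999
        for move in board.legal_moves:
            board.push(move)
            score = minimax(board, depth - 1)
            board.pop()
            best = max(best, score) if board.turn == chess.WHITE else min(best, score)
        return best

    print(minimax(chess.Board(), 3))  # even depth 3 already explores thousands of positions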

    Getting into "higher levels" of intelligence is just too messy to try, but at the minimum, you can't judge modern AI by old examples and old methodology. I'd feel the same way if I was looking to Deep Blue when thinking about what jobs are going to be replaced by AI.
     
  8. TedEH

    TedEH Cromulent

    Messages:
    4,009
    Likes Received:
    507
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    I'm a game programmer by profession, and I can say pretty confidently that game AI is not very smart. It's mostly smoke and mirrors - honestly, a character in a game that "feels smart" is more a reflection of the intelligence of the people who implemented it than any indication that it's actually intelligent in any way.

    AI is, in my opinion (at least insofar as games are concerned, but other kinds of AI are probably similar), in a state of "cleverly implemented, but not actually smart". Even something like machine learning is not really what I'd call smart. It's all just rules and pattern recognition. Computers have always been better than us at certain things, but they still don't "think" in any sense of the word. They follow instructions, and they follow them precisely. That's all computers do. The rest is human cleverness making things look smart.
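
    (To show what I mean by smoke and mirrors, here's the kind of hand-written state machine a typical game enemy runs on. The names and thresholds are made up, but structurally this is most of what ships as "game AI".)

    Code:
    # Made-up example of typical game "AI": a hand-written finite state machine.
    # Every behaviour here was decided in advance by a programmer, not learned.
    class EnemyAI:
        def __init__(self):
            self.state = "patrol"

        def update(self, can_see_player, distance_to_player, health):
            if self.state == "patrol":
                if can_see_player:
                    self.state = "chase"
            elif self.state == "chase":
                if health < 20:
                    self.state = "flee"            # "feels smart", but it's just a threshold
                elif distance_to_player < 2.0:
                    self.state = "attack"
            elif self.state == "attack":
                if distance_to_player >= 2.0:
                    self.state = "chase"
            elif self.state == "flee":
                if not can_see_player:
                    self.state = "patrol"
            return self.state                      # the game acts on whatever state we end up in

    print(EnemyAI().update(can_see_player=True, distance_to_player=10.0, health=100))  # -> "chase"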
     
    zappatton2 likes this.
  9. narad

    narad SS.org Regular

    Messages:
    5,022
    Likes Received:
    1,592
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    I'm sorry, this is super wrong. Check this out:

    https://deepmind.com/blog/agents-imagine-and-plan/

    Planning, with delayed rewards, no specified rules, no instructions.

    Or similarly, this (limited planning):

     
  10. TedEH

    TedEH Cromulent

    Messages:
    4,009
    Likes Received:
    507
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    It's a piece of software running this, and the problem of winning arcade games is still a simple one. The machine was trained with very specific inputs to solve a well defined problem. It's still more artificial than it is intelligent.

    Software is nothing BUT rules and instructions. If you put enough rules and instructions together it starts to look like magic, but it's still rules and instructions at the lowest level. Machines don't think up solutions on their own - they have to be given specific rules and iterate until they find a solution.

    Edit:
    And again, my comment was directed at mostly video games. Video game AI is a lot dumber than a lot of people think it is.
     
  11. TedEH

    TedEH Cromulent

    Messages:
    4,009
    Likes Received:
    507
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    The Montezuma video actually illustrates my point - the description explicitly says the game had to be tried through at least 100 million iterations to get to this point. The software didn't just pick up the game and immediately do this.

    Edit: Sorry, I interpreted that wrong - I think they meant 100 million frames of gameplay or something like that, not 100 million separate attempts. But that's still thousands of iterations of gameplay, regardless.
     
  12. narad

    narad SS.org Regular

    Messages:
    5,022
    Likes Received:
    1,592
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    So what's a "rule" in the example of these games?

    And yea, I agree that the AI put into video games is very dumb, but it's so far from state-of-the-art as to not be comparable at all. Really we shouldn't call these things by the same name. When you play video games you don't want to play against a good AI.

    But do you know that in the above examples, unlike game AI, it's actually able to play new levels it hasn't seen? It's not just brute-forcing some set of commands until it finds a solution, i.e., the sequence of keystrokes that solves a particular level. It generalizes. What is intelligence but the development of strategies and the ability to apply them effectively to new situations? I don't see the downside in how many iterations it took to train the model -- it has to build up a lot of stuff from scratch.
     
  13. TedEH

    TedEH Cromulent

    Messages:
    4,009
    Likes Received:
    507
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    A game is ALL rules. Input = particular movements. Hitting an enemy causes death (negative result). High scores are valuable. All of these are very clearly defined rules of an arcade game.

    It kind of is doing that, though. The initial training basically has to try out everything available to it and establish patterns. The rest of the "intelligence" is just applying those patterns to the new input. Eventually it will settle on the pattern of "if (enemy) -> avoid it so you don't die". The pattern is complicated, but it is just a pattern. You wouldn't be able to take this same "intelligence" and apply it to any other situation. If you swapped out the game, or changed its interface to the game, or drastically changed any of the rules of the game, then the training becomes useless and the program will no longer be any good at it without going through all the iterations again.
     
  14. TedEH

    TedEH Cromulent

    Messages:
    4,009
    Likes Received:
    507
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    And I know you'll say "but the article says it was given no rules to the game!" - but that can only be partially true. The software doesn't need to know all of the rules to reach the end result; it just needs a goal, and feedback as to whether or not its patterns are getting any closer to that goal. That goal is, in itself, a rule that the software is following.
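
    (To be concrete about what I mean by "a goal and feedback", here's roughly what that loop looks like in code - a made-up toy corridor, not any of the actual systems being discussed. The only thing the program ever learns from is a reward number.)

    Code:
    # Minimal sketch of "goal + feedback": tabular Q-learning on a 5-cell corridor
    # where the only reward in the whole world is reaching the last cell.
    import random

    N_STATES, GOAL = 5, 4                 # cells 0..4, reward only at cell 4
    ACTIONS = [-1, +1]                    # step left or right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for episode in range(2000):           # many, many iterations, as discussed above
        state = 0
        while state != GOAL:
            # explore fairly often, otherwise follow the pattern learned so far
            if random.random() < 0.3:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if next_state == GOAL else 0.0     # the goal is the only feedback
            # nudge the stored "pattern" toward whatever got closer to the reward
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += 0.1 * (reward + 0.9 * best_next - Q[(state, action)])
            state = next_state

    # the learned "pattern" is literally just this table: go right from every cell before the goal
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})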
     
  15. narad

    narad SS.org Regular

    Messages:
    5,022
    Likes Received:
    1,592
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    Actually, the models that are trained on all the Atari games perform better on games that are similar to other games, as some of the representations and basic strategies ("patterns") are transferable across games. It doesn't need all the iterations again.

    And when I play video games I also use "patterns" of "there's a guy, shoot him" / "duck, run away, don't get shot", etc. So I don't find having an interpretable pattern to be the sign of a lack of intelligence -- more that your strategies should be general and robust to simple variations in goal and environment. The brittlest patterns are, of course, the hard-coded rules of basic game AI, which is why that is a poor example of the kind of more modern AI that is threatening jobs.
     
  16. TedEH

    TedEH Cromulent

    Messages:
    4,009
    Likes Received:
    507
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    Except that this isn't the "AI is taking our jobs" thread - cause I agree that technology is a threat to jobs - but this is the "we are living in a simulation" thread, which at one point was suggesting that AI is getting somewhere close to real, human-like decision making, consciousness, etc., which it's not.

    I maintain that, as advanced as AI is, it still boils down to patterns, rules, and instructions. Computers are literally incapable of doing anything other than following the precise instructions we give them. The speed it happens at lends to the illusion of intelligence, but it's still an illusion, in the sense of comparing it to real intelligence.
     
    zappatton2 likes this.
  17. narad

    narad SS.org Regular

    Messages:
    5,022
    Likes Received:
    1,592
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    Fair enough -- I did get them confused since AI is not really the topic of either of them!

    I think this is a false dichotomy. Discovering patterns is an important part of intelligence, be it in a machine or an animal. Planning to accomplish a goal is an important part of intelligence. It doesn't matter whether these are ultimately implemented as machine instructions or as neurons firing -- they are abstract skills. Yes, they are precise instructions at the lowest level, but focusing on that comes at the expense of appreciating what they're doing: learning complex and generalizable behavior from nothing but an environment, a goal, and exploration. That is already further along than basic life, so unless you don't accept evolution as the origin of humans, I find it hard not to connect the dots.
     
  18. TedEH

    TedEH Cromulent

    Messages:
    4,009
    Likes Received:
    507
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    I guess my point is that they're only really simulating a very small part of what I would consider to be the kind of intelligence that people and animals have. A machine doesn't understand, it doesn't feel, it doesn't reason, etc. People do those things on behalf of the machine, then provide it as the context to make the computer do something that looks like learning. The machine is learning on an abstract level, but it's not learning on a literal level, or a human level. If you equate the brain with an advanced computer (I don't), then we've essentially been able to accomplish very rudimentary forms of only a small subset of what the brain-computer does. That's not to take anything away from the advances in AI that are happening - it's great progress - but to me it's not intelligent yet. Not really. Not in terms of what I understand intelligence to mean.

    I don't know what you mean by this. Very basic forms of life that are, on an abstract level, capable of less than our machine learning are not what I would call intelligent either.

    To be fair, I will grant you that if you take the dictionary definition of intelligence, which is just to acquire and apply knowledge, then yes - I can agree with the things you've said on some level. Machines can do that. But the conversations we've had about AI on this forum have been centered around the idea of simulating the human/animal process of intelligence - trying to simulate not just the vague, abstract application of knowledge, but to do so the way humans do it, which I think goes beyond the dictionary definition of intelligence. A human makes decisions based on an understanding of the question and the context surrounding the question. A machine makes decisions because previous iterations of the simulation have given that branch of options a higher weighted chance of success than the other options. I don't think those two things are really comparable.
     
  19. narad

    narad SS.org Regular

    Messages:
    5,022
    Likes Received:
    1,592
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    I feel like we're talking around each other because we're not working with the same definitions of things like...learning, and intelligence. And that's okay, because these are quite abstract and seem to have defied decades of attempts to adequately define and test them, by super smart guys.

    But there still seems to be some fundamental misunderstanding of how some AI systems learn. Everything you've said makes it sound like we're feeding a lot of our prior information into the machine, that we've done the heavy lifting ("People do those things on behalf of the machine, then provide it as the context to make the computer do something that looks like learning."). What is the context in AlphaGo? What is the context in Montezuma's Revenge? Or in the block-moving game? What are humans giving them that is the result of human learning?

    You brought up the millions of examples needed to train these systems, like it's some huge flaw. The human brain is the result of hundreds of millions of years of evolution -- it takes a lot of exploration and a lot of failure to find what works and refine it. By the time we're us, we have tons of biases in our brain, and we have many years of failures in exploring our world before we ever touch a video game.

    In contrast, I find it pretty amazing that an AI can be given nothing but what is essentially a screen and a joystick and a goal, and learn to play the game, when the reward is delayed for thousands of actions. After you've bungled around for several thousand moves and you somehow manage to get the key in the door, you have to figure out which of the things you did were important to reaching this good state (and in the process, what your actions do, what a key and door and ladder and platform look like, that you can't drop from a high platform, that you need to get under a ladder before you climb up it, that you must predict where enemies are going to be in the future and maneuver around them), and be able to apply that same learned strategy to novel environments. As simulated environments become more complex and the goals more difficult, we'll see behaviors that are increasingly closer to what we think of as human intelligence.
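
    (The credit-assignment part, mechanically, isn't mysterious - here's a tiny sketch of one standard way the credit for a single end-of-episode reward gets spread back over the thousands of earlier actions, via discounting. Toy numbers, not the actual DeepMind setup.)

    Code:
    # Tiny sketch of credit assignment with a delayed reward: discounting spreads
    # the credit for one late success back over all the earlier actions.
    def discounted_returns(rewards, gamma=0.99):
        """rewards: one number per action taken, in order; returns credit per step."""
        returns, running = [], 0.0
        for r in reversed(rewards):
            running = r + gamma * running
            returns.append(running)
        return list(reversed(returns))

    # e.g. an episode of 1,000 actions where only the last one (grabbing the key) pays off:
    episode = [0.0] * 999 + [1.0]
    credit = discounted_returns(episode)
    print(credit[-1], credit[0])  # 1.0 for the final action, ~0.00004 for the very first one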

    But that idea of machine learning as like, "find a good path, give it a high score!" is a gross oversimplification of what is actually going on as far as I'm concerned. A lot of human strategy could be called the same, i.e., "Yea, basketball, just find a way to get the ball in the basket, then try to do all those things again." Do that 50,000 times across a variety of opponents, and give high scores to all the things you did that worked in particular situations and maybe you've become a great basketball player.
     
  20. bostjan

    bostjan MicroMetal Contributor

    Messages:
    13,403
    Likes Received:
    1,458
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    As complex as Go is to play, the rules are very simple. Chess has more complex rules than Go, but is still quite simple overall. Real life is uncharted territory, in that we don't even know all of the rules. Sure AI could be used to figure out what the rules are on its own, someday, maybe, but it is not there yet.

    So a noughts-and-crosses AI consisting of eight lines of code can smoke anybody, because there is only one rule in effect dealing with how the game is won, and two additional rules in effect for playing. Chess AI intended to defeat a human player is going to use an opening book as well as preconceived end-game strategies.
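
    (For illustration, here's roughly what that noughts-and-crosses player looks like - not quite eight lines, but close, and it plays perfectly precisely because the rules are so tiny. Just a sketch, not anybody's actual engine.)

    Code:
    # Brute-force noughts-and-crosses: the "AI" is nothing but the rules plus exhaustive search.
    # board is a list of 9 cells, each "X", "O", or " "; 'me' is whoever moves next.
    WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        return next((board[a] for a, b, c in WINS
                     if board[a] != " " and board[a] == board[b] == board[c]), None)

    def value(board, me, other):
        """Best achievable outcome for 'me' (to move): 1 win, 0 draw, -1 loss."""
        w = winner(board)
        if w:
            return 1 if w == me else -1
        if " " not in board:
            return 0
        # try every empty square; the opponent then does their best against us
        return max(-value(board[:i] + [me] + board[i+1:], other, me)
                   for i, c in enumerate(board) if c == " ")

    def best_move(board, me="X", other="O"):
        return max((i for i, c in enumerate(board) if c == " "),
                   key=lambda i: -value(board[:i] + [me] + board[i+1:], other, me))

    print(best_move([" "] * 9))  # all openings draw under perfect play, so ties go to square 0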

    I mean, we are saying the same thing, I think:
    In other words, AI is potentially more effective at Go than it is at chess, and more effective at chess than it is at real life.
     
