Anyone on here particularly religious?

Discussion in 'Politics & Current Events' started by Hollowway, Jul 24, 2017.

  1. TedEH

    TedEH Cromulent

    Messages:
    3,948
    Likes Received:
    474
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    Agreed.

    What I mean by context is the set of goals/rules and the interface into the game - and the fact that we decided what those things are, not the machine. The machine didn't define its own goals and then work towards them. We told it win=good, fail=bad, high score=good, etc., plus whatever other number of assumptions need to be strictly defined to support those things. If we had defined the opposite goal, we'd have a machine that's amazing at losing at games instead, but we had to provide those parameters.

    Human intelligence can interact with the world without a strictly defined interface - machines can't really do that. I mean that in the sense that if you disconnect that machine from that game and put it in the middle of a basketball court, it's not going to be able to play that game. Even connecting it to a similar game has to be done via the right interface. You can't just hand it a mouse and keyboard, or open up Steam and say "have at it" - there needs to be something in between to say "look at this memory location for your high score", "count the length of time it took before losing as part of the success score", etc. It can't go "oh, this is also a game, I need to figure out how to win it". If you didn't specifically engineer it to move towards a goal, it wouldn't otherwise be trying to play the game, because it doesn't understand that concept. Computers don't have understanding and goals; they're engineered to work towards OUR goals based on OUR understanding.
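
    To make that concrete, here's a toy sketch of the kind of reward definition I mean (the names and the GameState shape are mine, not from any real system):

        from dataclasses import dataclass

        @dataclass
        class GameState:              # hypothetical stand-in for a real game's state
            won: bool
            lost: bool
            score_delta: float

        # The programmer-chosen definition of "good": win=good, fail=bad, high score=good.
        def reward(state: GameState) -> float:
            if state.won:
                return 1.0
            if state.lost:
                return -1.0
            return 0.001 * state.score_delta

        # Flip one sign and the exact same learning machinery becomes
        # amazing at losing - the goal lives in our definition, not in the machine.
        def reward_for_losing(state: GameState) -> float:
            return -reward(state)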

    I think it's amazing too, I just don't think it's really "intelligence" as I understand it. There's still a long way to go - a much longer way, I think, than most people assume.

    I disagree. I think that's a very good way to put it. That's exactly what the "training" does. There's a lot of fancy math and cleverness that goes into weighting these scores and balancing out which branches to explore next, but I've yet to come across anything that suggests it boils down to anything else.
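
    For the curious, that "weighting scores and balancing which branches to explore" can be as small as this - a plain epsilon-greedy sketch of mine, not any specific system's code:

        import random

        def pick_action(q_values, epsilon=0.1):
            """Epsilon-greedy: mostly exploit the best-scoring branch,
            occasionally explore another one at random."""
            if random.random() < epsilon:
                return random.randrange(len(q_values))                  # explore
            return max(range(len(q_values)), key=q_values.__getitem__)  # exploit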
     
  2. narad

    narad SS.org Regular

    Messages:
    4,821
    Likes Received:
    1,190
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    But I don't think of that as an important part of learning at all. You take a kid and put them down at a chess board, you don't ask them to come up with the game -- you explain the game, and the learning and the entire challenge of the game is discovering effective strategies. I think knowing what strategies are good, and knowing when to apply them is the essence of understanding the game. I'd argue AlphaGo has a better understanding of Go than grandmasters, especially early to mid-game.

    And how did we get OUR goals? Are you happy? Are you hungry? Do you want to have sex with someone? These are innate to us at this point, and all our subgoals are just derived from these biological goals. I don't see a huge difference between telling the machine that it should maximize score and seeing it then learn behaviors that involve jumping over pits and avoiding enemies, vs. telling a human they should get a great job to have a happy life and seeing them study hard in school and do some extracurriculars.
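
    And the "tell it to maximize score, watch the behavior fall out" loop is surprisingly small. A textbook-style sketch (tabular Q-learning; not any particular paper's code):

        from collections import defaultdict

        Q = defaultdict(float)        # (state, action) -> estimated long-term score
        alpha, gamma = 0.1, 0.99      # learning rate, discount factor

        def update(state, action, reward, next_state, actions):
            # The only goal we ever supply is `reward`. Jumping over pits and
            # avoiding enemies are never mentioned - they emerge as subgoals
            # of maximizing that one number.
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])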

    And in humans there are a lot of fancy neurons and chemical receptors that go into strengthening these connections, biasing us towards behavior that has allowed us to achieve our rewards, but I've yet to come across anything that suggests it boils down to anything else. I actually think in this particular case the burden of proof is more on the side of "machine learning is fundamentally different from human learning", since we will never be able to say that they're fundamentally the same until we have a more thorough understanding of the human process.
     
  3. narad

    narad SS.org Regular

    Messages:
    4,821
    Likes Received:
    1,190
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    Yeah, I mean it's not so dissimilar -- we agree that the worlds machines are grounded in and the world we are grounded in are separated to a great degree by the scale of rules and goals at play. I don't think of it so much in terms of rules as in terms of the state space: the number of choices I have at any one time, and the length of the delay before I'm told whether or not that was a good move. But I guess my major point is that if we simulated the world, and simulated the biological goals that ultimately motivate us, together with millions of tweaks to the general architecture, the framework we have now is a pretty capable beast.

    Just as we are self-aware, and a lot of neuroscience points to particular physiological features as possibly being necessary developments for this, motivated by an evolutionary need for it, I believe that a machine would also develop a sense of self-awareness if it had sufficient complexity and this was advantageous for solving the types of problems it was being presented with.

    I don't expect human-level general AI anytime soon, but who does? Singularity nutjobs? What I do expect is increasingly rich simulated worlds and challenges posed to these agents, until they are learning hierarchical subgoals and strategies for reaching them with essentially no hand-holding besides placing them in an environment and giving them a measure of success. Similarly, I expect the ability to pick up just about any game and perform well at it -- we're already seeing signs of this with the Atari games, where the only thing that changes game-to-game is the score, but racing games are racing games, etc., and what we have now is capable of transferring some of that knowledge.
     
  4. bostjan

    bostjan MicroMetal Contributor

    Messages:
    12,983
    Likes Received:
    1,148
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Good points, but in many-option scenarios, a true AI does not evaluate every option; it chooses an "ansatz" option and then tests and corrects. This first guess has to be programmed into the AI as an initial condition. How it corrects is based on some rules programmed into the AI. So, a scenario with infinite rules and two options takes an infinite amount of time to run through one trial cycle, whereas a scenario with two rules and infinite options can still be completed by an AI simply by choosing an option, evaluating the outcome quickly, then correcting. In other words, since the AI has to deal with all of the rules it is made aware exist, but does not have to deal with every option, the number of options is not as taxing as the number of rules in the scenario.
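
    To illustrate the ansatz-then-correct loop with a toy example of mine (not from any AI system - it's just Newton's method for square roots): start from a programmed first guess, test the outcome, correct, and repeat, without ever enumerating every candidate answer.

        def sqrt_ansatz(x, guess=1.0, tol=1e-9):
            # `guess` is the programmed initial condition (the ansatz);
            # the loop body is the programmed correction rule.
            while abs(guess * guess - x) > tol:
                guess = 0.5 * (guess + x / guess)
            return guess

        print(sqrt_ansatz(2.0))   # ~1.4142135...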

    From a religious perspective, I don't see how not expecting human-level AI anytime soon does anything to further the idea that we are actually all living in an AI. :scratch: To be honest, I haven't read all of the responses everyone posted, so I guess that's not what we are talking about anymore.
     
  5. TedEH

    TedEH Cromulent

    Messages:
    3,948
    Likes Received:
    474
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    Don't get me wrong, I'm not saying that machine learning isn't analogous to human learning at all, because on a very abstract level, there's clear parallels.

    Based on some previous pages of this thread.... yes, I think that's exactly who expects that. :lol:

    I guess, to me, while these machines are getting closer to a very generic/abstract definition of intelligence, they aren't close at all to what I think of as intelligence. Our disagreement comes down mostly to just semantics.

    I see those as very different. A human understands what it's doing; a machine just follows instructions. It has no concept of a goal, just programming that guides it to our goals. If you tell a human they should do something, they evaluate the meaning of the question and decide what to do. You don't "tell a computer what to do" in the same sense as a person - it doesn't interpret the question and make a decision, it's literally just a machine doing what it's programmed to do. Like if I ask a person a question - "what time is it?" - the person has an understanding of "what is a question", "what is time", "what is being asked of me", and can make any number of decisions as to how to answer. If you ask a computer "what time is it", it doesn't understand the question. You instead have to abstract a function getTheCurrentTime() that defines a set of instructions that will (hopefully) lead it to the answer we're looking for. The computer doesn't know what time is, it doesn't know what a question is, and it didn't parse the request for meaning before executing it.
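
    i.e. something like this (the function name is the one I made up above; the body is a plain-Python sketch):

        import datetime

        def getTheCurrentTime():
            # A fixed recipe written by the programmer. The machine doesn't
            # know what "time" means - it reads a counter and formats digits.
            return datetime.datetime.now().strftime("%H:%M:%S")

        print(getTheCurrentTime())   # e.g. "14:32:07"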

    Even in the case of a script that receives the question as a string that needs to be tokenised and parsed, it's still just a set of pre-defined rules, defined by the programmer, that decide what instruction belongs to what keyword. There is no real "understanding" going on on the part of the computer. Just like the engine of a car doesn't "know" what a road is, despite being able to travel on one in a way analogous to a person walking on it (although much better at it, to again continue the pattern of machines doing things better than us). That doesn't make the car smart, by any definition.
     
  6. narad

    narad SS.org Regular

    Messages:
    4,821
    Likes Received:
    1,190
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    There's a lot to reply to, and it's time-consuming to come up with the best reply. It's worth quickly pointing out, though, that one would have to basically solve all of philosophy of mind before being able to claim, with support, that humans understand <anything> and a computer that does <that thing> doesn't. You know, back to the Chinese room.
     
  7. bostjan

    bostjan MicroMetal Contributor

    Messages:
    12,983
    Likes Received:
    1,148
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Maybe an AI could answer better. :)

    Understanding, by definition, is the ability to make inferences based on knowledge. I don't think there's really much of an interesting problem here, since artificial intelligence has not gotten to the point where it can make connections between the knowledge presented and information that others would consider unrelated, as a human does (inferences). It's not a process thing, it's an experience thing. Could AI get to that point? Absolutely. Is it there now? No. :shrug: I think it's pretty cut and dried at this point.
     
  8. narad

    narad SS.org Regular

    Messages:
    4,821
    Likes Received:
    1,190
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    If you take that definition I feel like we're already there. It's an invalid comparison really since AI is typically grounded in a single-task world, so there is no chance to make a connection across tasks, to give it the feel of human reasoning.
     
  9. TedEH

    TedEH Cromulent

    Messages:
    3,948
    Likes Received:
    474
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    I suppose it could be argued that in order to design a computer or piece of software that's analogous to a human thinking process, we'd need to solve those things anyway.

    I suppose I make the distinction between what a machine "knows" and what the engineer who programmed it "knows" about the data it's operating on. Computers are all about abstractions on top of abstractions, where the machine doesn't care what the data it's working on really means. Like if I make a game and give a character an integer to represent health, the knowledge of the literal value of the integer sort of belongs to the computer (in the sense that it knows there's an integer at a particular place in memory that will be requested and operated on at some point), but the knowledge of "this represents the character's health" is not owned by the machine, it's owned by the programmer. The programmer knows what it means and tries to convey it through instructions. The computer doesn't really "know" anything. That same integer could be literally anything as far as the computer is concerned.
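
    You can even demonstrate that literally. A toy illustration (plain Python, nothing game-specific): the same four bytes read as "health", as a float, or as text, because the machine attaches no meaning to them:

        import struct

        raw = (100).to_bytes(4, "little")          # the "health" integer, as stored

        as_health = int.from_bytes(raw, "little")  # 100 - but only *we* call it health
        as_float  = struct.unpack("<f", raw)[0]    # ~1.4e-43 - same bytes, other "meaning"
        as_text   = raw.decode("latin-1")          # 'd\x00\x00\x00' - or characters, why not

        print(as_health, as_float, as_text)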
     
  10. narad

    narad SS.org Regular

    Messages:
    4,821
    Likes Received:
    1,190
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    But if you give it that integer and some other number (representing whether a goal is reached), and it goes and makes all these actions and eventually is hopping around, grabbing keys, climbing ladders, dodging enemies, then something has internalized that that integer reaching 0 is very bad w.r.t. accomplishing its goal. So in the confines of the world in which it lives, it learns a very practical understanding of the semantics of that integer. Of course we need to provide that number to the machine, because we are the creator of that world -- where else is it going to come from?
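
    And the channel between that world and the agent really is that narrow. A sketch in the style of OpenAI Gym's step() interface (the toy world and its logic are mine):

        class TinyWorld:
            def __init__(self):
                self.health = 3   # *we* call it health; the agent never sees the name

            def step(self, action):
                self.health -= 1 if action == 0 else 0
                observation = [self.health, 0.0]          # just numbers to the agent
                reward = -1.0 if self.health == 0 else 0.1
                done = self.health == 0                   # episode over
                return observation, reward, done, {}

    If that unnamed first number hitting 0 always coincides with done and low reward, a trained policy ends up avoiding it - which is all the "practical understanding of its semantics" I mean.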
     
  11. TedEH

    TedEH Cromulent

    Messages:
    3,948
    Likes Received:
    474
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    But it didn't internalize that, it was given that as a condition by the programmer.

    From the programmer. IMO everything a computer does is just an expression of the programmer who made it do that.

    Take something like a car. A car has no intelligence, it has no "knowledge". The car doesn't understand the road, it has no concept of acceleration. There is sort of knowledge embedded in the engineering of the machine, but it's an expression of those who built it. When you look at your speedometer, we sort of anthropomorphize the display and say it "knows" how fast you're going. But it doesn't. That display is just a reaction to an arbitrary input - a supplied voltage transformed into something that moves the needle. The system it gets that voltage from similarly doesn't "know" about the speedometer. The knowledge exists entirely in the engineer who said "if we do x math to determine speed, turn that into a signal, and supply that voltage to something that transforms the voltage into human-readable display, then we can tell how fast this car is going". The designer knows this. The mechanic knows this. The programmer knows this. The car has no knowledge. The computer has no knowledge. You could supply that same display with any voltage and it will still tell you "how fast you're going" because it doesn't know what that voltage is, it doesn't know what "speed" is.

    I'm not saying that what the machine is doing isn't an expression of knowledge, but it's an arbitrary display of an engineer's knowledge, not an intelligent display of the machine's understanding of anything.
     
  12. narad

    narad SS.org Regular

    Messages:
    4,821
    Likes Received:
    1,190
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    I feel like we're going in circles, but just to stick to your analogy: what about a self-driving car that has a camera in the driver's seat that takes images of the speedometer, and drives within speed limits, slows down when kids are playing on the side of the road, and otherwise learns to drive to the destination, never being given a single integer representing that value? What information has it received from the programmer regarding the speedometer?
     
  13. bostjan

    bostjan MicroMetal Contributor

    Messages:
    12,983
    Likes Received:
    1,148
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    It'd still have to be programmed to parse the visual information from the speedometer into some numerical value (most likely a positive integer). It'd have to be programmed to identify speed limit signage and parse that information into integer values. It'd also have to be programmed to identify "kids", "playing", and "on the side of the road", and to parse those into some sort of actionable information before it would be able to react to observing such a thing.
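
    To make that "parse the visual information" step concrete, a toy sketch of template matching (hypothetical 5x3 digit templates, no real vision library):

        import numpy as np

        # Hypothetical binary templates for a seven-segment-style display.
        TEMPLATES = {
            0: np.array([[1,1,1],[1,0,1],[1,0,1],[1,0,1],[1,1,1]]),
            1: np.array([[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1]]),
            # ... remaining digits omitted for brevity
        }

        def read_digit(patch):
            # Pick the template with the fewest mismatching pixels -
            # a programmed rule, not an understanding of "speed".
            return min(TEMPLATES, key=lambda d: np.sum(TEMPLATES[d] != patch))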

    A human typically already contains all of these subroutines from basic upbringing. Being a kid, at a young age, one learns what a "kid" is, what "playing" means, and, hopefully, is taught where "the side of the road" is and what sort of danger that presents. A human is taught how to read meters and numbers at a young age, and thus, is able to parse information from road signs and vehicle instruments.
     
  14. TedEH

    TedEH Cromulent

    Messages:
    3,948
    Likes Received:
    474
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    This wouldn't happen; that's not how it works. All of that information would just come from the software being fed streams of numbers. If you don't give it that data, it will never do any of those things. The camera used to detect obstacles? All it does is turn light into integers and feed them to the next stage, just like the previous example of the speedometer.

    Everything it "knows" comes from a programmer. A self driving car is still just a piece of software. It literally goes (if (somethingInTheWay()) evade()). Evasion would be a pattern given to it by a programmer. somethingInTheWay() is an pattern finding algorithm written by a programmer. Nothing a self-driving car does is it's own unique idea- it was programmed to do those things.
     
  15. narad

    narad SS.org Regular

    Messages:
    4,821
    Likes Received:
    1,190
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    ^^ None of that stuff is programmed in modern DeepMind-style DQN/Atari game solving. It gets the pixels of the screen plus the score (as an integer). There are still hundreds of other things on the screen it needs to learn how to understand, completely analogous to what the autonomous camera driver must do.

    I don't want to beat a dead horse here, but that's not at all how a (modern) self-driving car works. There is no turnLeft() function, etc. None of that stuff is explicitly programmed. There are sensors being fed to a model, and that model is trained in simulation to optimize arriving at its destination. Most of these sensors are less like the integer, where one number has clear semantics (to a human), and more like big matrices of floats that are completely incoherent to humans.
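
    For reference, the skeleton of that DQN-style setup really is just pixels in, action-values out. A minimal, hedged PyTorch sketch (layer sizes in the spirit of the DeepMind Atari papers; the details are mine):

        import torch
        import torch.nn as nn

        class DQN(nn.Module):
            """Maps a stack of 4 grayscale 84x84 frames to one value per action."""
            def __init__(self, n_actions):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),   # 84 -> 20
                    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),  # 20 -> 9
                    nn.Flatten(),
                    nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
                    nn.Linear(256, n_actions),  # estimated return per joystick action
                )

            def forward(self, frames):          # frames: (batch, 4, 84, 84)
                return self.net(frames)

        q = DQN(n_actions=6)
        values = q(torch.zeros(1, 4, 84, 84))   # act by taking argmax over values

    The score delta is the only supervision; everything about enemies, pits, and keys has to be squeezed out of maximizing it.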
     
  16. TedEH

    TedEH Cromulent

    Messages:
    3,948
    Likes Received:
    474
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    ^ I'm well aware that I've oversimplified it, but it's still driven by the knowledge of the engineers who trained it. They didn't just feed the car "here's a view of the world, do what you will with it". There very well might be an explicit turnLeft() function or event. I'm certain there would be an interface along the lines of setAcceleration(), setBrakingStrength(), etc., as well as something like detectObstacles(). The car didn't learn or invent those functionalities on its own - while parts of the algorithms are sort of fluid because of the learning/training setup, the interface, definitions, and constraints these are being used in are explicitly provided.
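
    Something like this hypothetical boundary (all the names here are mine) - a learned policy living inside a human-defined frame:

        # Hypothetical actuation interface - fixed by engineers, not learned:
        class CarControls:
            def setAcceleration(self, value: float): ...
            def setBrakingStrength(self, value: float): ...
            def setSteeringAngle(self, radians: float): ...

        def drive_step(policy, sensor_data, controls: CarControls):
            # The mapping inside `policy` may be learned, but which knobs exist,
            # their ranges, and when they get called were all decided by people.
            accel, brake, steer = policy(sensor_data)
            controls.setAcceleration(accel)
            controls.setBrakingStrength(brake)
            controls.setSteeringAngle(steer)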
     
  17. bostjan

    bostjan MicroMetal Contributor

    Messages:
    12,983
    Likes Received:
    1,148
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Is the example a self-driving car or a game?

    Actually, it is explicitly programmed. Keep in mind that self-driving cars are on the cusp of existing at the moment.

    A self-driving car, as it works best in October 2017, is a car with a GPS and a set of sensors. Those sensors identify which objects are where on a 3D map of the space around the car. Objects are identified by matching their shape against a library of preprogrammed shapes. One of the things still under review is how to deal with shapes that are not in the library. There are tons of subroutines programmed into these cars to make sure that they are able to detect objects accurately enough to not run over pedestrians, etc. How a self-driving car would react to a plastic bag blowing across the street is still somewhat arguable, since none of the software for these things is finalized yet.
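
    In the lookup style I'm describing, the core of the object identifier is essentially nearest-neighbour matching against that library. A toy sketch (mine, not Google's code):

        import numpy as np

        # Toy library of pre-programmed shape signatures (feature vectors).
        SHAPE_LIBRARY = {
            "pedestrian":  np.array([0.9, 0.1, 0.3]),
            "car":         np.array([0.2, 0.8, 0.5]),
            "plastic bag": np.array([0.4, 0.2, 0.9]),
        }

        def identify(detected, threshold=0.5):
            # Match a sensed shape to the closest library entry; shapes that
            # match nothing well are the still-unsolved "not in the library" case.
            name, dist = min(((n, np.linalg.norm(detected - s))
                              for n, s in SHAPE_LIBRARY.items()),
                             key=lambda t: t[1])
            return name if dist < threshold else "unknown"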
     
  18. narad

    narad SS.org Regular

    Messages:
    4,821
    Likes Received:
    1,190
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    I don't know what to say, really, other than that doesn't fit at all with what I know about how self-driving cars work. I suppose some particular company's implementation might look like that, but research into autonomous vehicles isn't looking up shapes in a library of pre-programmed shapes, calling subroutines, etc.
     
  19. bostjan

    bostjan MicroMetal Contributor

    Messages:
    12,983
    Likes Received:
    1,148
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Hmm. Care to share your knowledge of how these other companies do it, then? Because that's exactly how Google's system works. I didn't do the programming, but I built some of the sensors that were integrated into that design, so I've had a peek.
     
    TedEH likes this.
  20. narad

    narad SS.org Regular

    Messages:
    4,821
    Likes Received:
    1,190
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    Did you make your sensors prior to 2014? I mean, I'm not able to reveal some examples that I know in particular are not like that, but Nvidia does lots of press with its self-driving cars, so there are plenty of videos and discussion of what's going on in the system. None of these subroutine-y things or library look-up things are going on there. And that's just part of the industry trend away from such approaches, which are inherently more brittle than learning end-to-end. MIT's deep learning for autonomous driving course also talks all about this.
     
