Discussion in 'Politics & Current Events' started by Hollowway, Jul 24, 2017.
Nvidia uses libraries to identify objects: https://developer.nvidia.com/driveworks
Unfortunately, it doesn't help the discussion to know things that you can't reveal.
I'm willing to admit that the kinds of "AI" I work on put me pretty far from any actual machine learning, so I don't have any deep insider information on how close machine learning is to actual human thought processes. But I'm not convinced we're as close as joe-facebook-user thinks we are. I could be wrong, but my understanding of how computers work and how software gets made keeps me from being able to wrap my head around the idea that computers really "know" or "understand" anything.
My reference point is video games (and I'm sure anyone else who works in software would laugh at that as a reference point), which makes me well aware of how much of what goes on in a computer is about the illusion and experience of something happening, more so than anything actually happening.
But almost no video games have AI in any meaningful way. If AI were video game AI, we wouldn't be talking about AI replacing jobs, etc.
I'm not asking you to. For instance, the Nvidia self-driving car, here:
And that's a year ago.
Yes, Nvidia distributes libraries (not to be confused with matching an object to the closest thing in a library of shapes - not the same sense of the word) which are modularized versions of these components, but that's just part of Nvidia trying to branch out and market its in-house AI tech. Above is what I'm referring to (or any of their big 1-2 hour demo presentations at conferences).
Nope, click the link and read the information on their page, and look at the photos. They clearly state and demonstrate that they use image recognition libraries.
EDIT: I guess the discussion here might as well end. You have a hardware developer and a software developer giving specifics and you are claiming secret special knowledge of something else, and you won't divulge what it is other than to say that everyone else is wrong. I think you are aware that it's highly frustrating to try to have a discussion once something reaches the "I know you're wrong, but I can't do anything to prove it" phase.
Yes, they distribute image recognition libraries? I'm a bit lost as to why that's important? I'm trying to use the example of the car above to further this larger point that self-driving cars can learn their own representations of objects from sensor data. That objects and measurements need not be explicitly provided by the programmer. This is entirely evident from DeepMind's work but I'm here talking cars because cars were brought up.
Could someone use a modularized image recognition model? Sure - but that's neither here nor there, since I'm not trying to make a claim about all self-driving cars.
This all spurred off from our disagreement over how AI works - whether things are hard-coded into it or not. You keep bringing up examples where nothing is hard-coded, but those examples, upon further examination, turn out to rely on a lot of hard-coded things. Why is it important? Because you said self-driving cars don't work this way or that way, and I provided an example that did. You provided a counterexample, I did a five-second Google search that took me to the official website, which actually backed up my example of how it works. I posted a link to that website, but you still disagreed...
I mean, if you mis-spoke or something, or you're just shooting from the hip and turned out to be wrong, or even if this is just a misunderstanding due to communication (seems unlikely from the past two pages of this thread clarifying these statements), then just move on. You don't even really have to acknowledge it - you could simply leave it alone, if you wanted to. I thought I was ready to move on from this thread, but I made the mistake of clicking on it again, just to see you having the same argument with someone else as we were having months ago in another thread - I stayed away from the thread for a little bit, but you're still arguing this thing for pages and pages of the thread... To what end, though? So I made the huge mistake of trying to get involved in this again, and you ask me why it's important to defend my observation (which I believe I am in a strong position to do, logically, although, rightly, what is the point, if it makes no difference to you anyway?!). I'll turn that question back the other way - why is it so important to you to be right about self-driving cars in this case, so important that you would argue with multiple other users across multiple threads over it?
I get the impression we all agree roughly on how AI sort of "works", but have different interpretations of the implications of it. To me, and I hate using the "my own definition" route, but what AI does and how it works doesn't fit with what I understand real intelligence to be. I have no doubt that machine learning is a threat to jobs- but I have strong doubts that we're anywhere close to being in the matrix. That's all I really meant to get across.
I get what you mean by that, but still disagree on some level - or on two levels, rather. One is that when the discussion is about replacing jobs, the average person who doesn't know anything about hardware or software is unable to make this distinction, so they factor it in. "Games are so lifelike! We're getting close to being in the matrix! GLaDOS is going to take our jobs!" This is a large part of why I think the average person believes AI is farther along than it really is. The second level is that I think games vs. machine learning simply abstract different parts of what we call intelligence. One abstracts the process of applying knowledge, as you've described, and the other abstracts interaction. I mean, realistically, you have yourself used early board-game-playing software as an example - and that's a lot of the time VERY comparable to what AI in games is doing today.
From Nvidia: "Instead, the car learns on its own to create all necessary internal representations necessary to steer, simply by observing human drivers." That means these things are not hard-coded. What is there to argue about? I post something that, I would say, is definitively in support of my position, and you bring up these other libraries or talk about some other cars. I don't care about other cars -- I care about the cutting edge autonomous car research, as that is the area most relevant to the big AI questions. If you post some self-driving car that has sub-routines, it doesn't make the above example car disappear. The bigger argument is whether it's possible, not whether it's necessary.
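The quoted claim is about end-to-end learning: raw sensor data in, steering command out, with the intermediate representations learned rather than programmed. A toy sketch of that training setup (Nvidia's actual system is a deep convolutional network trained on real camera footage; here a linear model and random data stand in, and every name is illustrative):

```python
import numpy as np

# End-to-end imitation learning in miniature: fit pixels -> steering angle
# directly from human demonstrations. No lane markings, obstacles, or road
# rules are hand-coded anywhere. (A linear least-squares model stands in
# for Nvidia's CNN; the "camera frames" are random vectors.)
rng = np.random.default_rng(0)

n_frames, n_pixels = 200, 64
frames = rng.normal(size=(n_frames, n_pixels))   # fake camera frames
hidden = rng.normal(size=n_pixels)               # unknown scene-to-steering rule
human_angles = frames @ hidden                   # what the human driver did

# "Training": recover a pixels -> angle mapping from the demonstrations alone.
w, *_ = np.linalg.lstsq(frames, human_angles, rcond=None)

new_frame = rng.normal(size=n_pixels)
predicted_angle = float(new_frame @ w)           # the learned policy's output
```

The point of the sketch is what's absent: nothing in the fitting step encodes what a lane, a car, or an obstacle is - only (frame, angle) pairs go in.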
I don't think I mis-spoke at all, but I still feel there is a disconnect between the AI you describe and how deep reinforcement learning models (or maybe just end-to-end differentiable models) work. I mean, I feel bad about myself for not being able to make this point, but: a system that has pixel input, a set of controls (not highly abstracted controls - keystroke controls), and a single integer it is trying to maximize - that's not a lot of hand-coding, given that the AI is learning superhuman performance in a number of computer games. Google didn't buy DeepMind for half a billion because they coded up a bunch of subroutines and domain knowledge.
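That interface - observation in, keystroke-level action out, one scalar to maximize - can be sketched in a few lines. Everything below (the environment, the names, tabular Q-learning standing in for DeepMind's deep Q-network over pixels) is invented for illustration:

```python
import random

# Minimal sketch of the interface described above: the agent sees a fixed
# set of keystroke-level actions and one scalar score to maximize. No game
# rules are coded into the agent itself; it only tracks expected reward.
ACTIONS = ["LEFT", "RIGHT", "FIRE"]

class TinyGame:
    """Toy environment: pressing RIGHT scores a point, anything else doesn't."""
    def step(self, action):
        return 1 if action == "RIGHT" else 0   # the single integer reward

random.seed(0)
Q = {a: 0.0 for a in ACTIONS}                  # the agent's reward estimates
game, alpha, epsilon = TinyGame(), 0.5, 0.1

for _ in range(1000):
    if random.random() < epsilon:
        a = random.choice(ACTIONS)             # explore at random
    else:
        a = max(Q, key=Q.get)                  # exploit current best estimate
    r = game.step(a)
    Q[a] += alpha * (r - Q[a])                 # nudge estimate toward reward

best_action = max(Q, key=Q.get)
```

The agent converges on RIGHT without anyone telling it that RIGHT is the scoring move - that's the "not a lot of hand-coding" point in miniature.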
This thread is filled with artificial intelligence.
I get the feeling you're taking the marketing text a bit too literally. I would be incredibly surprised if there's not a TON of context-specific software driving (pun slightly intended) these cars, both to direct the machine learning components and outside of the machine learning altogether.
That kind of puts me in a tough spot. If I present articles that describe the AI the way I have proposed, it could always be argued that I'm just taking them too literally. There'd be no way to prove my position.
I guess I don't understand where your perspective comes from. If you're not a software or hardware guy, and are completely basing your conversation off of what you've read off of these articles, then yes, I think you're taking marketing text too literally. I'm taking the point of view that, given what I know about how software is made, lots of it is just smoke and mirrors, and public communication about what software really does tends to be.... exaggerated? Not quite accurate? You get the picture. If you're a software guy who works in something related to machine learning, or have some other kind of view into that world that we don't, then maybe you're on to something and we're way off the mark. But if that's not the case, then we're all just speculating on how we *think* AI works, and what the implications of that would be. I think Bostjan is probably in the best position of the three of us to really evaluate the state of the technology.
I don't know - I don't think someone's background is important if they are taking the time to read the information that's available. An argument should stand on the points made. Just reading the DeepMind research blog should be enough to convince you of how little information is encoded in these models (or read the DQN paper), and of course that particular Nvidia demo car.
I mean, I agree that hype is out there but these are research driven companies and if they were so explicit about the model conditions (as in the Nvidia car video description), and somehow were exaggerating, it would actually cause quite a controversy. It wouldn't be exaggeration or spin, it'd be dishonest.
^ That doesn't address my point though. I still don't know where you're basing your viewpoint from. I have no context to put your words into other than you read some stuff on the internet. If you have zero software experience, then I think you're missing just how much what computers do is smoke and mirrors.
I disagree completely. You can't just take everything a company says at face value. Reading a company's public description of what their software does isn't going to tell you enough about the implementation to make the jump that they've literally created intelligence comparable to a human brain.
Interesting that this morning there's a bunch of articles circulating around claiming that scientists have proven simulation theory definitively wrong.
That pretty much disproves Bostjan's whole rebuttal whining, and demonstrates that a model can learn a complex behavior without being given explicit knowledge of the obstacles in its environment, which is the main point of this 2-3 page back-and-forth (and took all of 5 seconds to Google). You can of course do things with explicit knowledge, but that's not the point. I bring up examples like this as a counterexample to the idea that this intelligent behavior is somehow the programmer's knowledge embedded into the system.
If you don't trust the description, read the paper:
I think the burden of proof is on the skeptic to find the untruth in a published scientific paper...
You know, you can lead a horse to water...
The first sentence literally says they mapped the network to explicit steering commands.
Yea...you kinda need to turn the steering wheel in order to drive...
Yeah, but you said earlier that they didn't hardcode any of that - that the car "knew how to drive" and that there were no explicitly programmed functions attached to it.
I don't think anything in that paper negated my view of this all being human knowledge that the machine doesn't literally understand. A neural network was trained on very carefully selected inputs, with a specific interface, the feedback loop for the training very carefully crafted to get the desired result, and a bunch of very context-specific software to get the intended behavior out of it. Yes, it's impressive, but it's still just a pattern recognizer used cleverly to steer a car. A very complicated, advanced pattern recognizer, but just a pattern recognizer. That network is only a small part of the engineered system that makes up the driving of the car. The network still doesn't know what "steering" means - it just spits out a number that we've mapped to steering angle, based on the images it was trained on for that purpose. The network, along with the cameras, software, etc., is still part of an expression of human engineering and knowledge.
Do you just want an AI to sprout up organically or something?
To be more clear, this is why I pointed out the first line of your link:
But this is exactly what that car is doing. The network generates an angle, then explicitly calls a steering function - something like SetTurnAngle(angleFromNetwork) - or, to put it another way, there is a turnLeft() function.
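That division of labor can be sketched in a few lines: the learned model emits a bare number, and explicitly programmed code decides that number means a steering angle. All names here (fake_network, set_turn_angle) are made up for illustration, not Nvidia's actual API:

```python
# The learned component emits only a number; human-written code interprets
# that number as a steering angle, clamps it, and calls the actuator.
# Everything here (the fake network, set_turn_angle) is illustrative.
def fake_network(frame):
    """Stand-in for the trained model: pixels in, one float out."""
    return sum(frame) / len(frame)

actuator_log = []

def set_turn_angle(angle_deg):
    """The explicitly programmed steering function described above."""
    clamped = max(-30.0, min(30.0, angle_deg))  # engineered safety limit
    actuator_log.append(clamped)

frame = [0.2, 0.4, 0.9]            # fake camera frame
raw = fake_network(frame)          # the network's output: just a number
set_turn_angle(raw * 60.0)         # humans chose this scaling to degrees
```

The number-to-degrees scaling and the safety clamp live entirely in hand-written code - the network never "calls" anything itself.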