Discussion in 'Off-Topic' started by narad, Nov 2, 2017.
In general, I think the main problem with current AI development is generalization, which is still completely out of reach. Our best AIs are really, really, really specific. They can play Go, but the AI that knows how to play Go doesn't have the slightest idea whether there is a tree in a picture, and even less how to drive a car.
Here are a few good articles that analyse the current situation:
Expressivity, Trainability, and Generalization in Machine Learning
The Impossibility of intelligence explosion
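To make the narrowness point concrete, here's a minimal, purely illustrative sketch (the `go_policy` function is a hypothetical stand-in, not any real system): a Go-playing model is a function from 19x19 board states to move scores, so its input and output spaces are baked into the task. It can't even accept an image, let alone tell you whether there's a tree in it.

```python
def go_policy(board):
    """Toy stand-in for a Go policy network: scores each of the 361 points.

    A real network would compute learned move probabilities; here we just
    enforce the task-specific input shape to show why the model can't
    transfer to other inputs.
    """
    if len(board) != 19 or any(len(row) != 19 for row in board):
        raise ValueError("go_policy only accepts 19x19 Go boards")
    return [0.0] * 361  # one score per board point

empty_board = [[0] * 19 for _ in range(19)]
print(len(go_policy(empty_board)))  # 361 move scores

photo = [[0] * 640 for _ in range(480)]  # a 480x640 "image"
try:
    go_policy(photo)
except ValueError as err:
    print(err)  # the model has no way to interpret this input at all
```

The point isn't the shape check itself; even if you resized the photo to 19x19, the output would still be 361 Go-move scores, which mean nothing for image recognition. The task defines the function.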
I understand where you're coming from, bostjan, but I think the lawmakers and insurers will see it differently because there are already similar situations that will function as precedents of sorts.
If an employee with their own personal liability insurance is driving their personal car while working for the employer and crashes into a Ferrari, the employer can be held liable separately or jointly.
If you hire someone to kill your wife to collect the life insurance money, you're both guilty of murder, not just the guy you hired to do the dirty work.
And people suing tend to go after money, so they'll typically add as many parties to the suit as they can in the hopes that one or more of them will pay out.
Whoah, there, that analogy is weird.
If I buy a self-driving car and it hits someone, how is that anything like hiring a hitman?!
It's more like if I bought a washing machine, and it spun out of control while washing my clothes and smashed my roommate's stuff. If my roommate sues my homeowner's/renter's insurance, they'll simply counter that it's on the manufacturer, because it's a manufacturing flaw.
But it's not even really too much like that, either, because driving a car means reacting to, potentially, a thousand idiot drivers on the road all trying to kill each other. Most of them (I hope) have insurance, but many do not. So here are some scenarios:
A) Self-driving car glitches out and runs over a pedestrian.
B) Self-driving car does nothing wrong, but some maniac without driver's insurance of any kind crosses the center line and smashes into it.
C) Self-driving car miscalculates something based on bad map data and ends up falling into a hole.
D) Self-driving car does nothing wrong, but a random sinkhole opens up right in front of it, such that it cannot avoid falling in.
Case A - totally horrible news, but how liable is the owner? Certainly not criminally liable. What if the owner has no insurance? He should have operator's insurance, right? Except he's not operating the vehicle, he's a passenger. Maybe he should have taken out operator's insurance covering whoever programmed the AI? Hmm, I think it's debatable at any rate.
Case B - Obviously not the owner's fault, why would he be liable at any level?
Case C - Ok, now it's between the owner and the manufacturer. If the owner has no operator's insurance, then the manufacturer should address this anyway. If they don't, then probably the courts would be involved.
Case D - Who is liable? I think this one is sticky.
In none of those cases, though, do I see where operator's insurance would be necessary or even really do anything.
A - The manufacturer would most likely be held liable in this case.
B - The car did nothing wrong, the other vehicle crossing into oncoming traffic did, so that party would be liable.
C - The manufacturer of the map would most likely be liable if there were any death or injury to anyone, or any damage to the property containing the hole or to other property belonging to another party.
D - No one would be liable as this is a freak accident. Note that liability insurance only covers the injury or death to other parties or damage to others' property that you cause, so you'd still be on the hook for replacing the car unless you have comprehensive coverage. (You'd also be on the hook for paying it off if it were financed or leased, though the finance/lease company would have required comp/collision and possibly gap insurance to cover their ass in such cases).
In cases A, C, or D where there is damage to another's property or injury/death to a person, the injured party can still come after you as owner of the vehicle. (Whether or not they would win would depend on the circumstances, but it is a possibility).
A & D are both freak accidents.
B - the other driver doesn't have insurance, you're both screwed.
C is closely related to A and D imo.
Got some AZ commentary videos -- very cool:
That's the kind of stuff I was hoping to see. There was a time, not long ago, when that defense was gaining in popularity, but now, it looks like it's probably just a bad strategy. The continuation of the story, however, is yet to come. Is there a better strategy in the opening for black, or is black just necessarily at a disadvantage? Seeing as how AlphaZero seems to have won as white and forced a draw as black, it looks like black might simply be at a disadvantage.
Studying chess, you always want to pick apart games played by players who are much better than you are, and, in this case, we have an AI that is superhuman playing another superhuman AI... the situation should be academically very rich for pretty much anyone interested in chess. If I ever have time to get into this again, I would love to have the logs from all of these games, just to try to wrap my head around what was going on in them, but I've never played anywhere near this level, and I'm also well out of my prime now, since I haven't played a halfway serious game of chess in over ten years.
And on the flip side... AI puts Gal Gadot's face onto pornstar's body:
On the flip-flip-side, and forgive me for thinking this up, but couldn't we now use AI to create tons of fake child porn and flood the underground exchanges with the aim to disincentivize sex traffickers from abducting actual kids?
Yeesh the implications for the face swapping tech could be kind of insane.
From what I've read, a lot of pedophiles already look at 3D/loli porn, plus evidence points toward access to material like that lowering rates of child sex abuse.
Milton Diamond, from the University of Hawaii, presented evidence that "[l]egalizing child pornography is linked to lower rates of child sex abuse". Results from the Czech Republic indicated, as seen everywhere else studied (Canada, Croatia, Denmark, Germany, Finland, Hong Kong, Shanghai, Sweden, USA), that rape and other sex crimes "decreased or essentially remained stable" following the legalization and wide availability of pornography. His research also indicated that the incidence of child sex abuse has fallen considerably since 1989, when child pornography became readily accessible – a phenomenon also seen in Denmark and Japan. The findings support the theory that potential sexual offenders use child pornography as a substitute for sex crimes against children. While the authors do not approve of the use of real children in the production or distribution of child pornography, they say that artificially produced materials might serve a purpose.
Diamond suggests providing artificially created child pornography that does not involve any real children. His article relayed, "If availability of pornography can reduce sex crimes, it is because the use of certain forms of pornography to certain potential offenders is functionally equivalent to the commission of certain types of sex offences: both satisfy the need for psychosexual stimulants leading to sexual enjoyment and orgasm through masturbation. If these potential offenders have the option, they prefer to use pornography because it is more convenient, unharmful and undangerous. (Kutchinsky, 1994, pp. 21)."
I fear more for the people who will use that as a means for revenge porn.
There was a Law & Order: SVU episode about that.
Whatever makes money, I guess...
But I feel like in about three years we'll no longer be able to trust any video/audio. Hopefully that mistrust will prevent anyone from being shamed by fake porn or any other form of fake video.
Yeah, that's the most hopeful way one can look at it. I just view it more as: even if it's out there and it IS fake, do you want to know that it exists and that it's being watched? Or, if you don't want it out there, it's now going to be on you to get it cleaned up and removed from everywhere you can find it; keyword being find, not exists.
In response to @bostjan and @tedtan —
I work as a programmer in a research institute's self-driving car program. Our AI can handle a lot of driving situations and is already deployed in military vehicles. It isn't perfect for civilian roads yet, but we haven't put much effort into making it good for civilian roads because litigation is an issue. Legal issues are by far the biggest thing holding self-driving cars back. The tech is there. And the way the industry is right now, companies will not profit from embracing self-driving cars, so it won't happen for a long time.
Self-driving cars are safest if every car is also self-driving and they can communicate. Due to the massive transition in the vehicle economy this would require, plus the legal issues, we won't be seeing them for a long time. Effective public transportation is a much more efficient and likely thing to support.
Exactly what I strongly suspect as well.
I worked in the auto industry on little projects that supported this bigger project more than ten years ago. I can imagine that it has come a long, long way since then.