|stay gold ponyboy|
Google’s car is super cute. I almost wanted to give it a hug after I saw the picture in the Times. It belongs in a happy cartoon world where everyone obediently stays at the speed limit and stops at every stop sign and yields to every pedestrian and otherwise follows the law to a T, but unfortunately we don’t live in that kind of world. If you drive the speed limit on 280, you might be safe from the law, but what use is that if you’re fatally injured by the impatient, 75 mph driver behind you?
One quote in the Times article that I thought was a little bit haunting: “One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.”
One more thing that isn’t about cars but ties into the same idea. A movie called Ex Machina came out earlier this year that explored the boundaries of artificial intelligence. Without spoiling too much of the plot, what I gathered from it (not that I watched it. It was R-rated and I follow the law!!111!!) was that the only way a robot can pass the Turing test and prove it’s human is to exhibit the most human trait of all – selfish manipulation.
Are humans, as the Federalists suggest, naturally evil and selfish? Cars are a halfway decent analogy for the economy: some argue that the economy regulates itself, and likewise, everyone on the highway has to drive while considering how fast or slow everyone else is driving. Is the natural speed check that comes from the presence of other drivers enough, or is government intervention necessary? And if government intervention IS necessary, how do we make the most of it when there are still people who don’t take the regulations seriously?
What do you think? Feel free to give examples of situations other than cars, though you’re welcome to talk about that too.