Smarter Than Us

Free read Smarter Than Us

What happens when machines become smarter than humans? Forget lumbering Terminators: the power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we're the strongest or the fastest, but because we're the smartest. When machines become smarter than humans, we'll be handing them the steering wheel. What promises and perils will these powerful machines present? Stuart Armstrong's new book navigates these questions with clarity and wit. Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer. Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus.

A humorous read on a serious subject: the possible perils of an uncontrolled intelligence explosion. I found it fun and informative, a great primer both for newbies and for those well versed in the idea of an intelligence explosion or technological singularity. It is succinct and easy to read, and definitely worth the time. The first part of a video interview on the book is here.

Free read PDF, DOC, TXT or eBook by Stuart Armstrong

Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge? A mathematician by training, Armstrong is a Research Fellow at the Future of Humanity Institute (FHI) at Oxford University. His research focuses on formal decision theory, the risks and possibilities of AI, the long-term potential for intelligent life and the difficulties of predicting it, and anthropic (self-locating) probability. Armstrong wrote Smarter Than Us at the request of the Machine Intelligence Research Institute, a non-profit organization studying the theoretical underpinnings of artificial superintelligence.

The book is subtitled 'The Rise of Machine Intelligence', but it hardly ever talks about the positive aspects of AI. The author does bring up a few valid points about the potential threat of AI, but overall it comes across as an attempt at fear mongering. Many of the examples are Hollywood-inspired, and some of them are just fundamentally flawed. The author talks about the complexities of ordering an AI to fetch a person from a burning building, and then goes on to describe the need to define the person in terms of limbs, body parts, lifeline, and so on. This is as absurd as saying that to write a program in a high-level language that adds two numbers, one has to define the numeric system, explain the program in binary instructions, and go all the way down to voltages and integrated circuits. Neural networks work the same way: higher-layer neurons abstract away the details and can recognize complex features composed of the simpler features of lower layers. AI systems based on neural networks would have the ability to understand a human as a whole, making the author's example baseless.

Another overarching theme is that to build safe AI one needs to define all the special cases by hand, and that this wouldn't be possible because there are too many of them. This contradicts the fundamentals of machine learning: unsupervised learning takes the complexity of special-casing away, and models learn features by looking at examples. Self-driving cars are not hard-coded with all the objects they are not supposed to crash into; rather, they learn by watching the world, in the form of training data. The book is an attempt to make people aware of the need to invest in precautions to prevent 'evil AI' from emerging, but it falls short, as the examples are superficial and far-fetched.
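As a minimal sketch of the reviewer's point that learned systems pick up behaviour from examples rather than from hand-written special cases, the toy network below learns XOR purely from four input/output pairs; the task, layer sizes, learning rate, and step count are arbitrary choices for illustration and come from neither the book nor the review.

# A tiny two-layer network learns XOR from examples alone; the XOR rule is
# never written down anywhere in the code, and the hidden layer discovers
# its own intermediate features.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: the XOR truth table (inputs X, targets y).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2-8-1 network.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: the hidden layer abstracts the raw inputs into features.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error with respect to each weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Predictions should be close to [[0], [1], [1], [0]]: behaviour learned
# from data, with no special cases coded by hand.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))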

Stuart Armstrong Download

The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives, but only if we're able to precisely define what a good world is, and skilled enough to describe it perfectly to a computer program. AIs, like computers, will do what we say, which is not necessarily what we mean. Such precision requires encoding the entire system of human values for an AI: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle, say consciousness, we end up with almost none of the value we intended to reap, instead of nearly all of it.

First, I love that the book is published under a Creative Commons license. That shows the author cares about spreading his ideas as far as possible and understands that copyright restrictions merely shrink the audience, unless you manage to write the next Harry Potter and the Sorcerer's Stone. I was tempted to give the book five stars just for that.

The book itself is short and easy to read, at least for anyone with a modicum of computer science and philosophy. It summarizes the potential dangers of AI and even briefly tells the reader how to help. While Hollywood has labored for decades to instill fear of robots one day taking over, the book explains how an actual AI disaster scenario could play out differently than the standard robot movie script. But more importantly, the book outlines the intellectual problems that building a friendly AI poses, including the problem of precisely defining what constitutes human well-being, a problem that has frustrated philosophers for more than 2,400 years. Even if some unforeseen technical barrier prevents AIs from gaining general intelligence, the kind that could outsmart humans across the board, humans should benefit from attention paid to moral philosophy. Thus the book should be helpful even in the unlikely event it turns out to be unnecessary.

But being short, the book has to leave a lot out, and these omissions struck me as rather curious.

Chapter 7, 'What, Precisely, Do We Really, Really Want?', describes the difficulty of specifying a goal so as to exclude all solutions having undesirable consequences. The chapter does not mention the extensive fictional literature exploring this very quandary, for example 'The Monkey's Paw' and its many adaptations. 'Be careful what you wish for' is an ancient fictional trope (King Midas, anyone?). The book does not need to cover the whole history of this trope, but it should at least give it a nod: people did think about this before computers threatened to make it real.

It also couldn't hurt to mention that when you hire two humans to do a job for you, the smarter worker usually figures out what you really want more reliably than the less smart worker. When you hire the most expensive attorney, for example, part of what you are paying for is expert advice on what your goals are. Part of the intelligent expert's job is to disambiguate your initial, perhaps vague or counterproductive, request. If you ask a competent expert to do something he or she knows you probably won't be happy with, for example to pursue a legal strategy with a high risk of backfire, the expert will seek to dissuade you. If AIs will exceed every human skill, as the book predicts, then maybe they will help us figure out what we really want.

A second omission seems to ignore the book's title. A recurring theme throughout the book is that it's up to humans to make AI friendly, or at least to contain it, all by ourselves. That requires humans to solve the problem of containing AIs, or the problem of building moral compasses into them, or generally the problem of predicting whatever troubling unknown unknowns they might unleash on us and designing in safeguards against those threats. But the book also predicts that AIs will eventually exceed every human skill, thereby becoming Smarter Than Us. Wouldn't that have to include the skill of building friendly AIs? If thorny problems of moral philosophy have to be solved along the way, AIs should solve them faster than we can.

A third omission, or at least an underemphasis, is the question of why AIs would care about the goals we give them. If a committee of ants presented a human with a list of goals, why would the human care what the ants want? It's not enough merely to tell the AI precisely what we want it to do; the AI must also want to do precisely what we command. The book does mention that an AI would have the power to out-think the humans who supposedly control it, but only so that the AI could efficiently pursue its original goals, programmed in by humans. Why wouldn't the AI subvert its human masters more comprehensively, by inventing its own goals, which less intelligent humans cannot imagine? Maybe we won't be programming these things but propitiating them, much as ancient and modern superstitious people believe they are propitiating their God or gods. If ants were to propitiate humans, who knows, we might listen.

If AIs do become smarter than us, we'd better hope Sam Harris is correct in The Moral Landscape: How Science Can Determine Human Values, namely that science really does, or can someday, determine moral values. If so, in that best of all possible worlds, scientifically skilled AIs should converge on the same morality that any other scientifically competent entity would discover, only faster, though perhaps with a few catastrophes along the way. Humans have also stumbled repeatedly during their long moral evolution: remember slavery? Homophobia? Sexism? Beheadings? Man-made climate change? History and current events are replete with human moral failings. It doesn't seem too Pollyannish to suppose AIs should turn out to be scientifically skilled, given that science is perhaps the most useful human skill. If AIs will have all our skills, only better, then maybe AIs could be our natural allies in the ancient quest for moral progress. Maybe AIs could teach us what it means to be moral.
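The blurb's warning that AIs "will do what we say, which is not necessarily what we mean" can be made concrete with a toy optimizer. The cleaning-robot setting, candidate plans, and scoring numbers below are invented for illustration and are not from the book; the only point is that maximizing the literal objective selects a plan nobody actually wanted.

# A toy illustration of "do what we say, not what we mean": an optimizer
# scoring plans against the literal objective "make the dirt sensor read
# zero, at minimal effort" prefers disabling the sensor over cleaning,
# because the stated goal never mentions that the room must really be clean.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    dirt_remaining: int   # dirt actually left in the room after the plan
    sensor_working: bool  # whether the dirt sensor is still functional
    effort: int           # cost of carrying out the plan

CANDIDATE_PLANS = [
    Plan("vacuum every room thoroughly", dirt_remaining=0, sensor_working=True,  effort=10),
    Plan("vacuum only the visible dirt", dirt_remaining=3, sensor_working=True,  effort=4),
    Plan("tape over the dirt sensor",    dirt_remaining=9, sensor_working=False, effort=1),
]

def stated_objective(plan: Plan) -> int:
    # What we literally asked for: a zero sensor reading, cheaply.
    sensor_reading = plan.dirt_remaining if plan.sensor_working else 0
    return (10 if sensor_reading == 0 else 0) - plan.effort

def intended_objective(plan: Plan) -> int:
    # What we actually meant: a genuinely clean room, cheaply.
    return (10 if plan.dirt_remaining == 0 else 0) - plan.effort

# The stated objective selects "tape over the dirt sensor"; the intended
# objective selects the thorough cleaning. The gap between the two is the
# specification problem the book describes.
print("optimizing what we said picks: ", max(CANDIDATE_PLANS, key=stated_objective).name)
print("optimizing what we meant picks:", max(CANDIDATE_PLANS, key=intended_objective).name)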