Comments on JerseyToday: Athens regained?

Mark Forskitt (https://www.blogger.com/profile/09189827278867422775), 2016-03-19 07:52 UTC:

Where does it get purpose from? That's the big question. If evolution is right, purpose must be emergent somewhere, and that means machines might be able to derive purpose for themselves. For a creationist it is a different issue: purpose is only there if the creator put it there. At least until intelligent machines start building machines more capable than themselves, which is on the cusp of happening.

TonyTheProf (https://www.blogger.com/profile/10486414706261508994), 2016-03-18 08:25 UTC:

Think we are a long way off Star Trek's Data!

The main problem with AI is teleological: where does it get purpose from? Human beings create values, but they also have purposes defined by evolutionary drives: survival and propagation of the species. Yet we have mixed values, which leads to choices being deliberated and made, sometimes counter to those evolutionary drives. Contraception and falling birth rates, for instance, are a counter-evolutionary choice.

Where do AI systems get their directives from? That is a question science fiction writers have explored for many years, usually with rather bad consequences for us! AI systems are relatively simple compared to biological systems, and do not have the complexity that comes with evolution.

Currently AI systems are limited to learned choices within constraints. In other words, they can make intelligent decisions, but only within a limited framework, rather like a diagnostic online health question-and-answer system. They are not "open ended".

And even then there are problems. Google's driverless car had a prang when it "assumed" a bus would slow and let it out. But human behaviour has an innate cussedness, as my American friends would say, and the bus didn't stop; the Google car hit it. Did the systems learn from the mistake? Perhaps, but would you want to be in a car at 60 mph making a mistake? Or a plane?

Mark Forskitt (https://www.blogger.com/profile/09189827278867422775), 2016-03-15 06:12 UTC:

So that probably puts the machine on a par with my schoolboy German.

This might be of interest: "AI crossword-solving application could make machines better at understanding language", https://www.sciencedaily.com/releases/2016/03/160307093515.htm

James (https://www.blogger.com/profile/09194881271051758232), 2016-03-14 08:33 UTC:

"Of course things have changed over a quarter of a century. Certainly machine learning has advanced significantly."

...but it still can't translate simple unambiguous German into grammatical English, much less cope with nuance or idiom.