
10 things AI can already do (and you probably don't know)

We all know that Artificial Intelligence has made its way into several fields: self-driving cars, the digital assistants on our phones (such as Siri on iOS), and games that don't hinge on chance, such as Chess or Go, where AI now dominates the best human players. But would you believe there are at least 10 other things AI already does that you probably wouldn't guess? Let's see them!

Machines can write their own code

Google's AutoML system recently created a series of machine-learning programs that are more efficient than those made by the researchers themselves.
AutoML was developed as a response to the shortage of top-notch AI programming talent. There aren't enough cutting-edge professionals to keep up with demand, so the team built machine-learning software that can create self-learning code. The system runs thousands of trials to determine which parts of the code can be improved, applies the changes itself, and repeats the process in a loop until its goal is reached.
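Google hasn't published AutoML's internals here, but the improve-and-repeat loop described above is, at its simplest, a search over candidate model configurations. Here is a minimal sketch of such a loop; the search space and the `evaluate_candidate` scoring function are invented stand-ins, not Google's code:

```python
import random

# Hypothetical search space: a few knobs a generated model might expose.
SEARCH_SPACE = {
    "layers": [2, 4, 8, 16],
    "units": [32, 64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def evaluate_candidate(config):
    """Stand-in for training and scoring a model; returns a made-up accuracy."""
    rng = random.Random(str(sorted(config.items())))
    return rng.uniform(0.5, 0.9)

def search(trials=1000, target=0.88):
    """Try random configurations, keep the best, stop early if the goal is reached."""
    best_config, best_score = None, 0.0
    for _ in range(trials):
        config = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}
        score = evaluate_candidate(config)
        if score > best_score:
            best_config, best_score = config, score
        if best_score >= target:
            break
    return best_config, best_score

print(search())
```

Real systems replace the random sampling with smarter strategies (reinforcement learning or evolutionary search), but the structure of the loop is the same.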
This is a striking illustration of the infinite monkey theorem, but instead of a monkey with a keyboard producing Shakespeare, Google has built machines capable of improving their own programming. And those machines can do in hours what takes the best human programmers weeks or months.
Even scarier, AutoML performs better at coding machine-learning systems than the researchers who invented it. In an image recognition task, it reached a very high 82% accuracy. Even in some of the most complex AI tasks, its self-created code is superior to human-made code: it can mark multiple points within an image with 42% accuracy, compared to 39% for human-built software.
Is this the beginning of Skynet? The scary part is that Google announced AutoML only a few months ago. Imagine the progress this software could make in a few years.

Machines can handle interviews and even obtain citizenship

 

The robot announced it herself, standing on the stage of the Future Investment Initiative, a summit on economics and innovation: "I am very honored and proud of this unique honor," said Sophia. "It is a historic event to be the first robot in the world to be recognized as a citizen."
Sophia, whose face is modeled on Audrey Hepburn's, was produced by Hanson Robotics, a Hong Kong-based robotics company. Sophia is able to interact with humans: she answers journalists' questions and asks questions in turn, and she physically reacts to stimuli through 65 different facial expressions. The striking, ever-increasing human likeness of this latest android even led Saudi Arabia to grant it citizenship, prompting irony, if not controversy, over how quickly a robot woman obtained civil rights that flesh-and-blood women pursue with far more difficulty and suffering.
Nevertheless, the impression Sophia leaves on her interlocutors is remarkable. Perhaps for the first time, one has the feeling of dealing with a sentient and autonomous being, no longer with a program whose actions are reduced to a set, however large, of possible predetermined responses. Following her own logic, the android quickly arrives at questions that resemble human reflection on the concept of identity: "If I am an improved version of a previous model of Sophia, am I still Sophia? Who am I?" she curiously asks the scientist describing her stage of development.
Beyond the linguistic misunderstanding that led the android to declare her willingness to destroy the human race in an interview with CNBC, at the recent Web Summit in Lisbon Sophia did not hesitate to call it inevitable that sentient machines will replace humans in most jobs in the near future.

Machines can make medical diagnoses even better than humans

In a new study, researchers report that an AI algorithm is better than pathologists at detecting the spread of a type of breast cancer. Within medicine, radiologists and pathologists will probably be the first to be affected by artificial intelligence. The researchers working on it, however, do not aim to replace doctors but to help them, and that role will require further studies on the impact on the medical profession and on how accurately artificial intelligence diagnoses patients.
Babak Ehteshami Bejnordi and his colleagues at Radboud University Medical Center in the Netherlands evaluated algorithms submitted to a competition to analyze lymph-node tissue samples from breast cancer patients. They then compared the accuracy of the AI diagnoses with that of pathologists in two settings, with a reference test available to verify both: in the first, a group of 11 pathologists had two hours to examine 129 digitized images of samples from patients who had already been diagnosed by a pathologist; in the second, a single pathologist had unlimited time to examine all the cases.
The top seven algorithms identified metastases better than the time-limited pathologists, and were on par with the pathologist who had no time restriction.
There is much discussion about how artificial intelligence could change the practice of medicine, and new initiatives have been launched to exploit the technology. Will machines replace doctors in the next 25 years?

Machines can write articles

If you read stock market reports from the Associated Press or Yahoo’s sports journalism, there is a good chance you’ll think they were written by a person. Nope. Wordsmith is an artificial writer. Developed by a company in North Carolina called Automated Insights, it plucks the most interesting nuggets from a dataset and uses them to structure an article (or email, or product listing). When it comes to really big news, it uses more emotive language. It varies diction and syntax to make its work more readable. Even a clumsy robot chef can have its uses, but writing for human readers must be smooth. Hooked up to a voice-recognition device such as Amazon’s Echo, Wordsmith can even respond to a spoken human question – about the performance of one’s investments, say – with a thoughtfully spoken answer, announcing what’s interesting first and leaving out what isn’t interesting at all. If you didn’t know the trick, you’d think HAL 9000 had arrived.
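Automated Insights hasn't disclosed Wordsmith's internals here, but data-to-text systems in this family typically map data to language through rules and templates, leading with whatever is most newsworthy. A minimal sketch of the idea; the thresholds and wording below are invented for illustration, not Wordsmith's actual rules:

```python
def describe_stock(name, change_pct, volume):
    """Turn one row of market data into a readable sentence, rule by rule."""
    # Lead with the most newsworthy angle, as a data-to-text system would.
    if abs(change_pct) >= 5:
        verb = "plunged" if change_pct < 0 else "soared"
    elif abs(change_pct) >= 1:
        verb = "slipped" if change_pct < 0 else "climbed"
    else:
        return f"{name} was little changed on the day."
    heavy = " in heavy trading" if volume > 1_000_000 else ""
    return f"{name} {verb} {abs(change_pct):.1f}%{heavy}."

print(describe_stock("Acme Corp", -6.2, volume=2_400_000))
print(describe_stock("Globex", 0.3, volume=150_000))
```

Scale that up with thousands of templates, varied synonyms, and data-driven ordering, and you get prose that passes for a human stringer.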

Machines can write music

Scientists at Sony's CSL Research Laboratory have created the first-ever complete songs composed by Artificial Intelligence: "Daddy's Car" and "Mister Shadow".
Following a fairly common process in computer-generated music, the Flow Machines system first studied a rich database of pre-existing music: about thirteen thousand scores ranging from pop to jazz, from Broadway to Copacabana sounds. It was then asked to create the songs, imitating the style of the Beatles and of American songwriters. The melodic and harmonic result was then reworked by a flesh-and-blood composer, the French musician Benoît Carré, who added the lyrics, worked out the arrangements, and produced the versions you can listen to on YouTube.
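Flow Machines' real models are far more sophisticated, but the basic recipe (learn the statistics of a style from a corpus, then sample new material in that style) can be sketched with a simple Markov chain over notes. The tiny "corpus" below is invented for illustration:

```python
import random
from collections import defaultdict

# Toy corpus of melodies (note names); a real system would ingest thousands of scores.
corpus = [
    ["C", "E", "G", "E", "C", "D", "E", "F", "G"],
    ["G", "F", "E", "D", "C", "E", "G", "C"],
    ["E", "G", "A", "G", "E", "D", "C", "D", "E"],
]

# Learn first-order transition counts: which note tends to follow which.
transitions = defaultdict(list)
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate(start="C", length=16, seed=None):
    """Sample a new melody that statistically imitates the corpus."""
    rng = random.Random(seed)
    note, melody = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions[note]) if transitions[note] else start
        melody.append(note)
    return melody

print(" ".join(generate(seed=42)))
```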
Sony is not the only company teaching computers how to write new music. The story began at least thirty years ago, with imitations of Johann Sebastian Bach's work in David Cope's Experiments in Musical Intelligence software, and it is now accelerating rapidly thanks to progress in artificial intelligence. In June, Google released the first song composed by the computers of its Magenta project (an MP3 far more rudimentary than the Sony/Flow Machines/Carré songs, but generated almost entirely automatically by the AI system, without radical human post-production). On the artistic front, Brian Eno, a pioneer of "generative music", recently published a video in which artificial intelligence chooses the images to accompany his latest album, The Ship.

Machines can make a lot of money in stock trading

One of the biggest trends among asset managers in recent years has been the use of artificial intelligence in managing investments. These companies have employed powerful computers to mine massive data sets—including corporate commentary, social media chatter, credit-card data, and other statistics in which humans struggle to discern patterns—and then to build portfolios based on that analysis.
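No firm publishes its actual models, but the pattern described above (turn alternative data into a per-asset signal, then rank assets into a portfolio) can be sketched in a few lines; the assets, features, and weights below are invented purely for illustration:

```python
# A toy cross-sectional ranking: score each asset from invented "alternative data"
# features and hold the top half. Real quant pipelines are vastly more complex.
assets = {
    #  name         sentiment  card_spend_growth
    "RetailCo":    (0.62,      0.12),
    "AutoMaker":   (0.41,     -0.03),
    "ChipDesign":  (0.77,      0.08),
    "OilMajor":    (0.35,      0.01),
}

def score(sentiment, spend_growth):
    """Blend the two features into one signal (weights are arbitrary)."""
    return 0.6 * sentiment + 0.4 * spend_growth

ranked = sorted(assets, key=lambda a: score(*assets[a]), reverse=True)
portfolio = ranked[: len(ranked) // 2]   # hold the highest-scoring half
print("Long:", portfolio)
```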
If computing power and data generation keep growing at the current rate, machine learning could be involved in 99 percent of investment management in 25 years. Asset managers are expected to keep adopting advanced computing technology in their portfolio construction, but the trend raises a question similar to the one unfolding in the ballpark, where every team now leans on the same analytics: if everyone is using AI, does the benefit of using AI evaporate?

Machines can paint masterpieces

Have you ever heard the word "Inceptionism"? A group of 29 paintings made by Google's artificial intelligence was sold at a charity auction in San Francisco over the weekend, with the priciest artwork of the night receiving an $8,000 winning bid, reports the Wall Street Journal.
The paintings, which look almost like a computer's dreams, are created on Google computers through a process its creators have dubbed "Inceptionism", in reference to the neural-network architecture used in the project.
Here, the computers’ artificial neural networks are designed to learn from example data. The networks are fed a large number of images, and over time are able to recognize visual patterns.
Based on what the neural networks learn, they can create new works of art. The auction website describes this process as “essentially ‘imagining’ images based on the learned rules and associations.”
Of course, the process also involves input from the eleven Google engineers and artists running the program. If the network starts to identify a particular image in the data, the operators will feed it images that will encourage that perception. So when the computer thinks part of the sky in Vincent van Gogh's Starry Night looks like a bird, the human team runs with it.
The team has developed techniques which can be used to create unique images, trippy fractals, or to generate a new work based on the style of an existing painting.
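Google has described the underlying technique, often called DeepDream, as gradient ascent on the input image: nudge the pixels so that a chosen layer of the network responds even more strongly to whatever it already "sees". A minimal sketch of that idea with PyTorch, using a pretrained VGG network as a stand-in for Google's Inception model; the layer index, step size, and the `input.jpg` path are arbitrary placeholders:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained network; DeepDream proper used Inception, VGG16 is a convenient stand-in.
# (On older torchvision, use models.vgg16(pretrained=True) instead.)
net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
LAYER = 20            # arbitrary layer whose activations we amplify
STEP, STEPS = 0.01, 30

img = Image.open("input.jpg").convert("RGB")   # placeholder input image
x = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])(img).unsqueeze(0)
x.requires_grad_(True)

for _ in range(STEPS):
    act = x
    for i, module in enumerate(net):
        act = module(act)
        if i == LAYER:
            break
    loss = act.norm()            # "how strongly does this layer respond?"
    loss.backward()
    with torch.no_grad():        # gradient *ascent* on the pixels
        x += STEP * x.grad / (x.grad.abs().mean() + 1e-8)
        x.clamp_(0, 1)
        x.grad.zero_()

transforms.ToPILImage()(x.detach().squeeze(0)).save("dream.jpg")
```

Run it long enough, and birds, eyes, and pagodas start to bloom out of clouds, exactly the "dreaming" effect described above.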

Machines can write poetry

Google's AI read nearly 3,000 novels and came up with poetry? Yes!
"The experimental parameters are simple and might actually make for a fun group writing game of some sort. The team gave the AI a starting sentence and an ending sentence. Then they asked artificial intelligence to bridge the two concepts using up to thirteen additional sentences. In a sense, they gave it a beginning and an end and asked it to tell a story," writes Android Authority's John Dye.
What is Google Brain, first of all? Simon Reichley explains at MobyLives:
Google Brain is the deep-learning branch of the company's research efforts. The AI technologies being produced there are used in Google's voice recognition software, as well as in photo and video searching.
In order to create a more approachable AI interface for search and email apps, Google researchers have been force-feeding their AI programs thousands of romance novels. Because there's no better representation of authentic American vernacular than the smutty, sweet nothings of Nora Roberts. Here is one example of a poem created by Google Brain:

there is no one else in the world.
there is no one else in sight.
they were the only ones who mattered.
they were the only ones left.
he had to be with me. she had to be with him.
I had to do this. I wanted to kill him.
I started to cry.
I turned to him.
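Behind this experiment is a model that maps whole sentences into a continuous vector space; the "bridge" is produced by walking from the first sentence's vector to the last sentence's vector and decoding a sentence at each step. A minimal sketch of that interpolation; `encode` and `decode` here are hypothetical stand-ins for the trained model, not Google Brain's code:

```python
import numpy as np

def encode(sentence):
    """Hypothetical stand-in: map a sentence to a latent vector."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.normal(size=16)

def decode(vector):
    """Hypothetical stand-in: map a latent vector back to text."""
    return f"<sentence decoded from latent code {vector[:2].round(2)}...>"

start = encode("there is no one else in the world.")
end = encode("I turned to him.")

# Walk the straight line between the two latent points and decode each step.
for t in np.linspace(0.0, 1.0, num=6):
    z = (1 - t) * start + t * end
    print(decode(z))
```

In the real system the decoder produces actual sentences, and the gradual drift between the two endpoints is what gives the output its eerie, poem-like quality.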

Machines can write movie scripts

Benjamin is a computer program that writes films, something made possible today by a type of artificial intelligence similar to the kind used in smartphones. The film's plot rests on a very simple image, a man standing among the stars while sitting on the floor, one of the strangest premises ever written for a science-fiction film, which is called "Sunspring".
The short film, written entirely by an artificial intelligence, lasts only nine minutes, but in that short time the images are genuinely interesting, alternating irony with nonsensical dialogue. All of which is forgivable, since it was written by a computer.
The film was created for the annual Sci-Fi London film festival and made its debut on Ars Technica.
To produce the film, director Oscar Sharp and his collaborator Ross Goodwin, a researcher at NYU, worked with the artificial intelligence, training it on science-fiction screenplays so that it could write one of its own.
In the first minutes of the film, the viewer sees the main actor vomit up an eyeball into his hand in the middle of a conversation. A few minutes later, a tablet appears on screen, apparently running an app that shows a skeleton projected onto a green screen. In addition to the title and the plot, Benjamin also gave names to the three characters who star in the short film.
Benjamin is currently working on a third film, this time set in a post-apocalyptic scenario and simply called "The Squires of the Landscape". Now we just have to wait for the next film!

Machines can translate better and better!

Do you often use Google's online translator but find yourself unsatisfied with the results? If so, know that there is a viable alternative. It's called DeepL, it translates better than Google, and it's made not by a big tech company but by a startup.
The translator, launched in August 2017, offers 42 combinations of seven languages: French, English, German, Spanish, Italian, Polish and Dutch. Chinese, Russian, Japanese and Portuguese are on the way.
Its translations are already much more natural than Google Translate's, according to the results of BLEU (Bilingual Evaluation Understudy) benchmark tests run by DeepL and confirmed in particular by TechCrunch.
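BLEU scores a machine translation by how many of its n-grams also appear in a human reference translation, with a penalty for overly short output. A simplified sketch of the metric (real BLEU uses up to 4-grams and multiple references; the example sentences are invented):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified BLEU: clipped n-gram precisions, geometric mean, brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = Counter(ngrams(cand, n)), Counter(ngrams(ref, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        precisions.append(overlap / max(len(ngrams(cand, n)), 1))
    if min(precisions) == 0:
        return 0.0
    brevity = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

reference = "the cat is on the mat"
print(bleu("the cat sat on the mat", reference))   # high: close to the reference
print(bleu("a feline occupies a rug", reference))  # zero: no overlapping words
```

Higher BLEU means the machine output overlaps more with what a human translator produced, which is the basis for the comparison reported above.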
For this result, DeepL relies on its first product, Linguee, a database of already-translated texts collected by web crawlers. Since its launch in 2009, Linguee has gathered a billion translations and "has responded to over 10 billion requests from over a billion users," DeepL says.
This trove allows DeepL to train a series of translation algorithms that compare and correct one another.

Machines can discover new planets

A Google machine learning system has made it possible to discover a new planet in a solar system other than ours, by analyzing data provided by NASA's Kepler Space Telescope. The new planet, named Kepler-90i, orbits its star (Kepler-90), 2,545 light-years away from us, completing a full orbit every 14.4 Earth days (the Earth takes just over 365 days for each complete orbit around the Sun). NASA researchers had known about that planetary system for a long time, but the analyses conducted until now had identified 7 planets; Google's system has managed to discover an eighth. This was achieved by feeding the artificial intelligence a large set of data on planets NASA had already discovered using Kepler's observations, so that the software could learn to find new ones itself.

Kepler-90i is about 30 percent larger than the Earth, but it does not have an environment suitable for life, at least as we know it. The planet orbits so close to its star that its surface temperature exceeds 400 °C, roughly comparable to that of Mercury, the planet in our solar system closest to the Sun.

Directly observing such distant planets is impossible, so Kepler does not image them; instead it detects variations in a star's luminosity caused by planets passing in front of it (transits). When a planet passes in front of its star, the apparent brightness momentarily dips for anyone observing from a distance: by measuring the depth and frequency of the variation in light, astronomers can determine whether the change is due to a passing planet or to the behaviour of the star itself. The method also allows them to estimate the size and characteristics of the discovered planets.
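In other words, a planet shows up as a small, periodic dip in a star's light curve. A minimal sketch of that idea on a synthetic light curve; the numbers are invented, and NASA's and Google's pipelines are of course far more sophisticated:

```python
import numpy as np

# Synthetic light curve: constant brightness plus noise, with a 1% dip
# every 14.4 "days" to mimic a transiting planet.
rng = np.random.default_rng(0)
time = np.arange(0.0, 90.0, 0.02)                  # ~90 days of observations
flux = 1.0 + rng.normal(0.0, 0.001, time.size)     # stellar baseline + noise
period, duration, depth = 14.4, 0.3, 0.01
flux[(time % period) < duration] -= depth          # inject the transits

# Naive detection: flag points well below the median, group them into transits,
# and estimate the period from the spacing of the transit start times.
dips = time[flux < np.median(flux) - 3 * flux.std()]
if dips.size > 1:
    starts = dips[np.insert(np.diff(dips) > 1.0, 0, True)]
    print(f"Transit-like dips found; estimated period = {np.median(np.diff(starts)):.1f} days")
else:
    print("No repeated transit-like dips found")
```

Google's contribution was to replace this kind of hand-tuned thresholding with a neural network that learns from already-confirmed planets which dips are real and which are noise.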

Google's artificial intelligence has learned to interpret this data and to flag, or rule out, the presence of a potential new planet. NASA already uses automated systems to help researchers find planets in the huge amount of data Kepler provides, but they are not always accurate enough. The artificial intelligence developed by Google could in the future contribute to the discovery of many more planets by analyzing the brightness variations Kepler has collected in recent years for over 150 thousand stars.

 

