Sunday, March 8, 2020

AI MACHINE LEARNING & DEEP LEARNING





What Is The Difference Between Data Science And Machine Learning?
Some of the most common confusion arises among modern technologies such as artificial intelligence, machine learning, big data, data science, deep learning and more. While they are all closely interconnected, each has a distinct purpose and functionality. Over the past few years, the popularity of these technologies has risen to such an extent that many companies have woken up to their importance and are increasingly looking to implement them for their business growth.
However, among aspirants, there seems to be a cloud of misconceptions surrounding these technologies. This post will help you get a clear picture of what two diverse yet closely associated technologies, data science and machine learning, are all about.
Data Science
In straightforward words, data science is the processing and analysis of the data you generate to draw various insights that serve a multitude of business needs. For example, when you log in on Amazon and browse through a few products or categories, you are generating data. This data is used by a data scientist at the back end to understand your behaviour and push retargeted advertisements and deals to get you to buy what you browsed. This is one of the simplest applications of data science, and it keeps getting more intricate with concepts like cart abandonment and more.

Data science includes the processes of:
Data extraction
Data cleansing
Analysis
Visualization
Generation of significant insights
A data scientist is responsible for being as inquisitive as possible with the data set in hand to make the most unexpected business connections. Tons of insights lie unnoticed in massive chunks of data, and it is data science that sheds new light on areas like customer behaviour, operational shortcomings, supply-chain cycles, predictive analysis and more. Data science is crucial for companies to retain their customers and stay in the market.
Machine Learning
For basic comprehension, understand that machine learning is a part of data science. It draws on statistics and algorithms to work on the data generated and extracted from various sources. What happens frequently is that data gets generated in huge volumes, and it becomes utterly tedious for a data scientist to work on it manually. That is when machine learning comes into action. Machine learning is the ability given to a system to learn from and process data sets autonomously, without human intervention. This is achieved through complex algorithms and techniques like regression, supervised clustering, naive Bayes and more. Perhaps the simplest application of machine learning can be found on Netflix: after you watch a few TV series or movies, the site starts recommending shows and films based on your preferences, likes and interests.
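As a small, concrete illustration of one of the techniques named above, the sketch below trains a naive Bayes classifier with scikit-learn. This is only a minimal sketch under assumed conditions: scikit-learn is assumed to be installed, and the tiny data set is invented purely for illustration.

# A minimal naive Bayes sketch with scikit-learn (assumed installed).
# The tiny data set below is invented purely for illustration.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Two numeric features per sample, with a binary label.
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
y = np.array([0, 0, 1, 1])

clf = GaussianNB()
clf.fit(X, y)                      # learn per-class statistics from the data

print(clf.predict([[1.2, 1.9]]))   # -> [0]
print(clf.predict([[5.5, 8.5]]))   # -> [1]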
To become a machine learning expert, you need knowledge of statistics and probability, technical skills like programming languages and coding, data evaluation and modelling skills, and more.
Data science is a sweeping term that incorporates aspects of machine learning for functionality. Machine learning is, in turn, part of artificial intelligence, where a specific set of instructions is taken to a whole new level.
Machine language

Binary

Sometimes referred to as machine code or object code, machine language is a collection of binary digits or bits that the computer reads and interprets. Machine language is the only language a computer is capable of understanding.

The exact machine language for a program or action can differ by operating system on the computer. The specific operating system dictates how a compiler writes a program or action into machine language.

Computer programs are written in one or more programming languages, like C++, Java, or Visual Basic. A computer cannot directly understand the programming languages used to create computer programs, so the program code must be compiled. Once a program's code is compiled, the computer can understand it because the program's code has been turned into machine language.

Machine language example

The following is an example of machine language (binary) for the text "Hello World".

01001000 01100101 01101100 01101100 01101111 00100000 01010111 01101111 01110010 01101100 01100100
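To see where that binary comes from, the short Python sketch below (standard library only) converts each character of "Hello World" to its 8-bit ASCII code; it reproduces the pattern shown above.

# Convert "Hello World" to the 8-bit binary codes of its ASCII characters.
text = "Hello World"
binary = " ".join(format(ord(ch), "08b") for ch in text)
print(binary)
# -> 01001000 01100101 01101100 01101100 01101111 00100000 ...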

The following is another example of machine language (non-binary), which will print the letter "A" 1000 times to the computer screen.

169 1 160 0 153 0 128 153 0 129 153 130 153 0 131 200 208 241 96

Machine language is the lowest-level programming language (except for computers that utilize programmable microcode). Machine languages are the only languages understood by computers.

Why Humans Don't Use Machine Language
While easily understood by computers, machine languages are almost impossible for humans to use because they consist entirely of numbers. Programmers, therefore, use either a high-level programming language or an assembly language. An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers.

Programming language
A programming language is a vocabulary and set of grammatical rules for instructing a computer or computing device to perform specific tasks. The term programming language usually refers to high-level languages, such as BASIC, C, C++, COBOL, Java, FORTRAN, Ada, and Pascal.

Each programming language has a unique set of keywords (words that it understands) and a special syntax for organizing program instructions.


High-Level Programming Languages
High-level programming languages, while simple compared to human languages, are more complex than the languages the computer actually understands, called machine languages. Each different type of CPU has its own unique machine language.

Lying between machine languages and high-level languages are languages called assembly languages. Assembly languages are similar to machine languages, but they are much easier to program in because they allow a programmer to substitute names for numbers. Machine languages consist of numbers only.

Lying above high-level languages are languages called fourth-generation languages (usually abbreviated 4GL). 4GLs are far removed from machine languages and represent the class of computer languages closest to human languages.

Converting to Machine Language
Regardless of what language you use, you eventually need to convert your program into machine language so that the computer can understand it. There are two ways to do this:

1) Compile the program.
2) Interpret the program.
Recommended Reading: See compile and interpreter for more information about these two methods.
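As a rough illustration of these two steps, the Python sketch below uses only the standard library's compile() function and dis module. CPython compiles source text into bytecode, a lower-level form that its interpreter then executes; bytecode is not true machine language, but the translate-then-execute idea is analogous.

# Illustration only: Python bytecode stands in for machine language here.
import dis

source = "print(2 + 3)"
code_object = compile(source, "<example>", "exec")  # step 1: translate the source
dis.dis(code_object)                                # peek at the lower-level instructions
exec(code_object)                                   # step 2: the interpreter executes them -> 5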
The question of which language is best is one that consumes a lot of time and energy among computer professionals. Every language has its strengths and weaknesses. For example, FORTRAN is a particularly good language for processing numerical data, but it does not lend itself very well to organizing large programs. Pascal is very good for writing well-structured and readable programs, but it is not as flexible as the C programming language. C++ embodies powerful object-oriented features, but it is complex and difficult to learn.
The Top Programming Languages?
According to IEEE Spectrum's interactive ranking, Python is the top programming language of 2017, followed by C, Java and C++. Of course, the choice of which language to use depends on the type of computer the program is to run on, what sort of program it is, and the expertise of the programmer.
Assembly Language
An assembly language is a programming language that is once removed from a computer's machine language. Machine languages consist entirely of numbers and are almost impossible for humans to read and write. Assembly languages have the same structure and set of commands as machine languages, but they enable a programmer to use names instead of numbers.
Each type of CPU has its own machine language and assembly language, so an assembly language program written for one type of CPU won't run on another. In the early days of programming, all programs were written in assembly language. Now, most programs are written in a high-level language such as FORTRAN or C. Programmers still use assembly language when speed is essential or when they need to perform an operation that isn't possible in a high-level language.
Low-Level Languages
A low-level language is a machine language or an assembly language. Low-level languages are closer to the hardware than high-level programming languages, which are closer to human languages.
AI, Machine Learning and Machine Learning Algorithms
Artificial Intelligence and Machine Learning are terms from computer science. ... Machine Learning: Machine learning is the kind of learning in which a machine can learn on its own without being explicitly programmed. It is an application of AI that provides a system with the ability to automatically learn and improve from experience.
Using AI and machine learning to study expressive music performance: project survey and first report
This article presents a long-term inter-disciplinary research project situated at the intersection of the scientific disciplines of Musicology and Artificial Intelligence. The goal is to develop AI, and in particular machine learning and data mining, methods to study the complex phenomenon of expressive music performance. Defining formal, quantitative models of expressive performance is one of the big open research problems in contemporary (empirical and qualitative) musicology. Our project establishes a new direction here: using inductive learning methods to discover general and valid expression rules from (large amounts of) real performance data. The project is currently entering its third year and is planned to continue for at least four more years. In the following, we explain the basic concepts of expressive music performance, and why this is such a central phenomenon in music. We present the general research strategy of the project, and discuss the various challenges and research opportunities that arise here. We then briefly describe the current state of the project and list the main achievements made so far. In the remainder of the paper, we discuss in more detail one particular data mining approach (including a new algorithm for learning characterisation rules) that we have developed recently. Preliminary experimental results demonstrate that this algorithm can discover general and robust expression rules, some of which actually constitute novel discoveries from a musicological point of view.

Artificial Intelligence is a broad field of research and study with the aim to create intelligent machines to help human beings solve many challenging problems in computer science, software engineering and operations research. It is still evolving. Machine Learning is a subset of AI and focuses on a narrow range of applications. ML algorithms try to fit a function to the data that best explains the relationship between the input and the output variables. The wide range of ML algorithms allow for finding simple (linear) or complex (non-linear, rule-based etc.) interaction of input and output variables, which could be used for predicting future patterns using new data. The machine learning techniques comprise an essential part of the data science toolbox. The field of data science combines machine learning with big data, distributed computing capabilities and programming skills along with expertise in statistical models and quantitative techniques. Data science, in all practicalities, aims to understand and analyse actual phenomena with data. 
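The "fit a function to the data" idea above can be sketched in a few lines. The example below is a minimal sketch that assumes scikit-learn and NumPy are installed and uses made-up numbers; it fits a simple linear function relating an input variable to an output variable and then predicts for new data.

# A minimal sketch of "fitting a function to data" with scikit-learn
# (assumed installed). The data below is made up purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Input variable X and output variable y with a roughly linear relationship.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

model = LinearRegression()
model.fit(X, y)                       # learn the mapping from X to y

print(model.coef_, model.intercept_)  # learned slope and intercept
print(model.predict([[6.0]]))         # predict for new, unseen data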
Machine Learning (ML)
is a subset of AI. It is essentially a combination of mathematical algorithms and statistical models that power AI. ML algorithms enable computers to learn from the available data to make predictions and inferences without requiring explicit programming instructions to perform the required tasks. These ML algorithms can be broadly categorised into supervised, unsupervised and reinforcement learning.
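For contrast with the supervised regression sketch earlier, the following minimal sketch (again assuming scikit-learn is installed, with invented points) shows the unsupervised category: k-means groups unlabelled data into clusters on its own.

# A minimal unsupervised learning sketch (scikit-learn assumed installed):
# k-means groups unlabelled points into clusters without any target labels.
import numpy as np
from sklearn.cluster import KMeans

# Unlabelled 2-D points forming two rough groups.
X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(X)

print(kmeans.labels_)           # cluster assignment for each point
print(kmeans.cluster_centers_)  # learned cluster centres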
Deep Learning:
Deep learning is actually a subset of machine learning. It technically is machine learning and functions in the same way but it has different capabilities.
The main difference between deep learning and machine learning is that machine learning models become better progressively, but the model still needs some guidance. If a machine learning model returns an inaccurate prediction, the programmer needs to fix that problem explicitly, but in the case of deep learning, the model corrects it by itself. A self-driving car system is a good example of deep learning.
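A minimal sketch of a deep learning model is shown below, assuming TensorFlow/Keras is installed; the random data is only there to give the network something to train on. The point is that the network adjusts its own weights through backpropagation during fit(), rather than relying on hand-written rules.

# A minimal deep learning sketch (TensorFlow/Keras assumed installed).
# The data is random and exists only to show the training loop running.
import numpy as np
import tensorflow as tf

X = np.random.rand(100, 4)                   # 100 samples, 4 input features
y = (X.sum(axis=1) > 2.0).astype("float32")  # toy binary target

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)         # weights are corrected automatically

print(model.predict(X[:3]))                  # predictions for the first 3 samples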
Artificial intelligence:
Now if we talk about AI, it is a different thing from machine learning and deep learning; in fact, deep learning and machine learning are both subsets of AI. There is no fixed definition for AI, and you will find a different definition everywhere, but here is one that will give you an idea of what exactly AI is.
“AI is the ability of a computer program to function like a human brain.”
AI aims to actually replicate a human brain: the way a human brain thinks, works and functions. The truth is that we have not been able to build a proper AI so far, but we are very close to it. One example of AI is Sophia, one of the most advanced AI models present today. The reason we have not been able to build proper AI yet is that we still do not understand many aspects of the human brain, such as why we dream.
Deep Learning | Introduction to Long Short Term Memory
Long Short Term Memory (LSTM) is a kind of recurrent neural network (RNN). In an RNN, the output from the last step is fed as input to the current step. LSTM was designed by Hochreiter & Schmidhuber. It tackles the problem of long-term dependencies in RNNs, where an RNN cannot predict a word stored in long-term memory but can give more accurate predictions from recent information. As the gap length increases, an RNN does not give efficient performance. LSTM can, by default, retain information for a long period of time. It is used for processing, predicting and classifying on the basis of time-series data.

Structure Of LSTM:
LSTM has a chain structure that contains four neural networks and different memory blocks called cells. Information is retained by the cells and the memory manipulations are done by the gates. There are three gates:
Forget Gate: The information that is no longer useful in the cell state is removed with the forget gate. Two inputs, x_t (the input at the current time step) and h_t-1 (the previous cell output), are fed to the gate and multiplied with weight matrices, followed by the addition of a bias. The result is passed through a sigmoid activation function, which gives an output between 0 and 1. If the output for a particular cell-state element is close to 0, that piece of information is forgotten; if it is close to 1, the information is retained for future use.

Input gate: The addition of useful information to the cell state is done by the input gate. First, the information is regulated using the sigmoid function, which filters the values to be remembered, similar to the forget gate, using the inputs h_t-1 and x_t. Then, a vector is created using the tanh function, which gives an output from -1 to +1 and contains all the possible candidate values from h_t-1 and x_t. At last, the values of the vector and the regulated values are multiplied to obtain the useful information.


Output gate: The task of extracting useful information from the current cell state to be presented as the output is done by the output gate. First, a vector is generated by applying the tanh function to the cell state. Then, the information is regulated using the sigmoid function, which filters the values to be remembered using the inputs h_t-1 and x_t. At last, the values of the vector and the regulated values are multiplied and sent as the output and as the input to the next cell.
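The three gates described above can be written out directly. The NumPy sketch below runs one step of a single LSTM cell; the weight matrices are random placeholders standing in for values a real network would learn during training.

# A minimal NumPy sketch of a single LSTM cell step, mirroring the gates
# described above. Weights are random placeholders, not trained values.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden, inputs = 8, 4
rng = np.random.default_rng(0)

# One weight matrix per gate, acting on the concatenation [h_{t-1}, x_t].
W_f, W_i, W_c, W_o = (rng.standard_normal((hidden, hidden + inputs)) for _ in range(4))
b_f = b_i = b_c = b_o = np.zeros(hidden)

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([h_prev, x_t])   # combine previous output and current input
    f = sigmoid(W_f @ z + b_f)          # forget gate: what to drop from the cell state
    i = sigmoid(W_i @ z + b_i)          # input gate: what new information to add
    c_tilde = np.tanh(W_c @ z + b_c)    # candidate values in (-1, +1)
    c = f * c_prev + i * c_tilde        # new cell state
    o = sigmoid(W_o @ z + b_o)          # output gate: what to expose as output
    h = o * np.tanh(c)                  # new hidden state / cell output
    return h, c

h, c = np.zeros(hidden), np.zeros(hidden)
h, c = lstm_step(rng.standard_normal(inputs), h, c)
print(h.shape, c.shape)  # (8,) (8,)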
Implementing Deep Q-Learning using TensorFlow
Prerequisites: Deep Q-Learning
This article will demonstrate how to do reinforcement learning in a larger environment than previously demonstrated. We will be implementing the Deep Q-Learning technique using TensorFlow.

Note: A graphics rendering library is required for the following demonstration. For the Windows operating system, PyOpenGL is suggested, while for the Ubuntu operating system
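Since the full demonstration is not reproduced here, the sketch below shows only the core Deep Q-Learning pieces in TensorFlow/Keras (assumed installed): a Q-network, epsilon-greedy action selection, and the Bellman target update. The single transition at the end is made up; a real agent would collect transitions from an environment such as the rendered one mentioned above.

# A minimal Deep Q-Learning sketch with TensorFlow/Keras (assumed installed).
# The transition at the end is a placeholder; a real agent would gather
# transitions by interacting with an environment.
import numpy as np
import tensorflow as tf

state_size, n_actions, gamma, epsilon = 4, 2, 0.99, 0.1

# Q-network: maps a state to one Q-value per action.
q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(24, activation="relu", input_shape=(state_size,)),
    tf.keras.layers.Dense(n_actions, activation="linear"),
])
q_net.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

def choose_action(state):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(q_net.predict(state[None, :], verbose=0)[0]))

def train_on_transition(state, action, reward, next_state, done):
    # Bellman target: r + gamma * max_a' Q(s', a') for non-terminal states.
    target = reward
    if not done:
        target += gamma * np.max(q_net.predict(next_state[None, :], verbose=0)[0])
    q_values = q_net.predict(state[None, :], verbose=0)
    q_values[0][action] = target
    q_net.fit(state[None, :], q_values, verbose=0)

# Single made-up transition, just to show the update running end to end.
s, s_next = np.random.rand(state_size), np.random.rand(state_size)
a = choose_action(s)
train_on_transition(s, a, reward=1.0, next_state=s_next, done=False)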

Conclusion

As more companies migrate to automated and machine-led algorithms to both reduce costs and eliminate errors, machine learning and artificial intelligence have become a leading sector of employment. There are scores of big data jobs that are not being filled due to the lack of qualified personnel. Within the world of automation, machine learning and AI are one
