Chomsky Hierarchy – Computerphile

Recently we've had a request saying: can Computerphile do something on finite state automata? In considering what to do about that, it seemed to me it would be a good idea to look at where finite-state automata sit in the scheme of things. We've done a lot about Turing machines, for example; we've covered the fact that really every single computer, of any sort nowadays, is a Turing machine. So what I'd like to do is refer to a diagram which shows, if you like, the various types of Turing machine – a hierarchy where, as you go inside, you make fewer and fewer demands on what you need. And, if you look at this set of circles here, I can even relate it back to some of the videos we've done previously. You will remember that when we were doing the Ackermann function we eventually decided that it's of the Recursive sort, which means it will terminate but could take an awfully long time. If you remember, leading up to that, I said there's a certain sort of Turing Machine, outside that, which is a real so-and-so, which says: "Sometimes the algorithm you give me will give an answer – and you're happy – but sometimes I will go into a loop, and when you say 'How can I detect, in general, that you are in a loop?', the answer is 'You can't!'"
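To see why that answer has to be "You can't", here is a minimal Python sketch of the classic argument – this is not from the video, and the halts() oracle in it is hypothetical by construction: if such a checker existed, the troublemaker function below would have to both halt and loop forever.

```python
def halts(program, argument):
    """Hypothetical oracle: report whether program(argument) would terminate.
    The point of the argument is that this function cannot actually be written."""
    raise NotImplementedError("no general halting checker exists")


def troublemaker(program):
    """Do the opposite of whatever halts() predicts for program run on itself."""
    if halts(program, program):
        while True:       # predicted to halt, so loop forever instead
            pass
    else:
        return "done"     # predicted to loop forever, so halt immediately

# Ask: does troublemaker(troublemaker) halt?
# If halts() answered True, troublemaker loops; if it answered False, it halts.
# Either way the oracle is wrong, so no such general loop-detector can exist.
```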

And we've done other videos with my colleague Mark Jago all about the Halting Problem, as it's called. And then we went one even worse: we've been out in the outer perimeter of hyperspace here, to ask: are there some problems that are so awful that no algorithm can exist? And we did the Busy Beaver, if you remember, which is a sort of encoding of a particular Turing Machine. It says: "Look, there's not a general algorithm here – I'm not trying to do n factorial. What I'm asking is, for machines of this sort, can you predict how many zeros it will print out?" And the answer is that there isn't an algorithm that can say how those Busy Beaver programs will behave in general – if only there was! What you have to do is run them all and just exhaustively say: "I don't know, there isn't an algorithm, just try them". What happened after Gödel and Turing and others did all this, in the nineteen thirties, is that people started saying: "Well, these Turing Machines, y'know, it's wonderful – they're a pencil-and-paper thing, but you could imagine building hardware to do them, and of course those are general-purpose computers as we now know them." But people said: "Is there a sort of subset of Turing Machines where you can say it doesn't need more than a definite amount of RAM – guaranteed? That would be nice to know."

And those come out to be in this inner circle of Type 1, here. Then people started to say: "Hey, there's this thing called a pushdown store, which the Americans call a stack. That's a one-ended memory device. You can't dip into it arbitrarily: you can either take something off the top or push something new on the top. So any addition or reading of your memory can only be done at the top of the stack. Is that a sort of special case?" Yes, it is. >> Sean: And we've looked at that with your Towers of Hanoi, haven't we? >> DFB: Yes, Towers of Hanoi is a classic example of something where you just want to get hold of the whole bunch of discs and sort them in your hand, like RAM, y'know: just park that one there, store that one there, think about it, put them back together and plonk them back on the rod. But you can't do that! You can only do it one-ended, and for something where, y'know, "I could do that in two or three moves if only I could take all the discs off", you end up having to do 64 moves.
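To make that one-ended memory concrete, here is a small Python sketch – not from the video – that recognises balanced brackets, a classic pushdown (Type 2) job. The list is used purely as a stack: the only operations on it are a push onto the top and a pop off the top.

```python
def balanced(text: str) -> bool:
    """Recognise balanced (), [] and {} using only a pushdown store."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []                       # the one-ended memory
    for ch in text:
        if ch in '([{':
            stack.append(ch)         # push onto the top
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:   # pop off the top
                return False
    return not stack                 # accept only if every push was matched


if __name__ == "__main__":
    print(balanced("([]{})"))   # True
    print(balanced("([)]"))     # False
```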

And then there's one right in the middle, in the inner circle here, that needs no memory at all – in principle. And that's what these finite-state automata are all about. You might ask: "Who discovered all this – who filled in all these gaps?" Because there we are with Turing, who perversely discovers the most general thing going, in the nineteen thirties, but people don't know the simpler story underneath. Well, the person who discovered it is still with us. I think he must be in his late eighties now – his name is Noam Chomsky, and my friends say that you ought to pronounce it "Homski", like the "ch" in [Scottish] "loch", but I'm happy to be put right on that. He's a genius – near-genius – guy; I think he's been at Harvard, MIT, places like that ever since he was young. He really was talented. He's a linguistician. If you study the structure of natural languages – any languages, computer languages even – I think I'm right in saying you're a linguistician. Well, in the late nineteen fifties he started saying: "Look, to understand natural languages better, I'm going to look at the most restrictive form of language I can think of". Y'know, really simple things. How about a language whose words are just strings of the letter i? So iii is a word; five i's, iiiii, is a word. Any number of i's is a word. How simple can that be? Yes, very simple.
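As a concrete illustration of how little machinery that language needs – again, this code is not from the video – here is a finite-state recogniser in Python for the non-empty strings of the letter i. Two states and a transition table are the whole machine: no stack, no RAM, just a current state.

```python
def accepts_only_is(word: str) -> bool:
    """Finite-state automaton for the language { 'i', 'ii', 'iii', ... }."""
    # state 0: nothing read yet (reject); state 1: one or more i's read (accept)
    transitions = {
        (0, 'i'): 1,
        (1, 'i'): 1,
    }
    state = 0
    for ch in word:
        if (state, ch) not in transitions:
            return False             # any other character is rejected
        state = transitions[(state, ch)]
    return state == 1


if __name__ == "__main__":
    print(accepts_only_is("iiiii"))  # True
    print(accepts_only_is("iix"))    # False
```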

And then he went on to say things like: "Yeah, what's a bit more complicated than that?" Because those very simple languages, as we'll find next time, don't need any memory at all – they really don't. And what's the one that sits outside? He did more investigations and said: "Ah! There is one where a one-ended memory will work". Yeah – these are the Chomsky Type 2. So remember, Chomsky's numbering goes, as it were, the opposite way around: Type 0 is the most general, right, the Recursively Enumerables. A subset of Type 0 is the Recursive languages, the ones that really do terminate – e.g. Ackermann. Type 1 is the one where it needs RAM but you can predict how much RAM. So he discovered that sort – that's Type 1, i.e. Turing Machines with a predictable, finite RAM requirement. And he just filled in the whole picture. In the period from about 1959 to the middle 1970s there was a huge amount of work filling in the middle of this diagram, and that includes things like computer languages – Algol, how to parse them, how to compile them. It was all filled in, in the middle of this diagram, but all basically referring back to that work Chomsky did in 1959, saying: "These are the language varieties".
