Interview with John Hennessy
June 29, 2005
Stanford, California

RW: John Hennessy is a pioneer in the fields of computer architecture, microprocessor design and microprogramming. In this 2005 interview, Dr. Hennessy describes his high school development of a tic-tac-toe player, which set him on the road to computer design, and his college studies leading to a PhD and to Stanford University. Through his many VLSI projects at Stanford, he gained an appreciation of microprocessor design, leading up to a leave of absence to co-found the RISC computer maker MIPS. He describes the RISC-CISC wars and how, with the benefit of Moore's Law, Intel's marketing and production clout allowed it to prevail in the desktop and server markets with the ancient X-86 architecture. He tells of his rise through Stanford from professor to dean to provost and, most recently, to president, and why Stanford is the most entrepreneurial university in the world.

 

RW: Can you tell me a little bit about how you - your family and growing up? That sort of thing.

 

JH: Sure, Rob. I grew up - I was born in New York City, but grew up mostly on Long Island. My father was an engineer, working in the aerospace industry. That certainly started a lot of my early interest in science and engineering. I grew up in a family of six, the first of the six, and really began to get more interested in science and engineering in middle school and my high school years. Probably the transformative experience I had was a project that I undertook with my best buddy in high school where we built a computer that played tic-tac-toe using all surplus relays, because at that time, using real integrated circuits was completely out of the question on our budget, given what they used to cost for even modest chips. But we found that you could buy surplus relays very cheap through mail order and so we used surplus relays to build this machine. And I still remember part of it was the presentation, so we put it in a box. We covered the outside of the box with black contact paper and then put red and green lights on it for the computer versus the user. And I still remember how astounded my fellow students were that a box could beat them at tic-tac-toe!

 

RW: So then you went to college.

 

JH: Then I went to college. After looking at several places, I decided to go to Villanova to try to get a balance between a campus that I liked to be on - it was where my father went - but still be in a place that had a good engineering program. I had already gotten fairly interested in computers, and that was a good opportunity to continue to pursue that and to spend long nights in the computer center. This is, of course, long before personal computers. The only computer was the big one in the computer center. But we discovered that if you were willing to stay up till midnight, you could get your hands on the computer after all the university's administrative work had been done and, basically, have it to yourself between midnight and four o'clock in the morning. And that was a great way to build your skills and enable you to do things, and it got me very interested. I then had an opportunity to participate in an undergraduate research project, starting in my junior year. We had had a designer from Burroughs who had come back to Villanova to do his master's degree and he was working on an early microprogrammable machine, in some sense, what you might think of as the predecessor to what later would become microprocessors. Multiple chip, implemented with a bit-slice style technology, but basically a little computer, which, with higher levels of integration, you could eventually see how to shrink into a single chip. And I got to work on that project for a while and that really inspired me then to think about graduate school and begin to think about an academic career. So I then started to look around and I had another challenge. My wife and I started dating in our senior year in high school, but by that time, we had both decided to go to colleges that were hundreds of miles apart.
So we were separated during our undergraduate years, but with the passion of youth, desirous to bring ourselves back to the same location. I had finished my undergraduate degree in three and a half years and she had finished her teaching degree in three and a half years. So we were both trying to get back together and I ended up going to Stony Brook because they would admit me in the middle of the year and they gave me financial aid. But I had the wonderful opportunity of, within six months, walking into something that just turned out to be an outstanding PhD thesis opportunity. It was the early days of microprocessors, the first microprocessors were coming out, the 8008. Before the 8080, even. And we were beginning to think about using those in industrial control applications and the challenges of building the software system for them. And there was a guy at Brookhaven National Labs who was worried about how to do bone density scans for people who were working near the accelerator and with radioactive materials, and ensure that they were not getting long-term, large doses of radiation that could be harmful. Well, this is a tricky problem because you basically do a bone scan by taking an X-ray, but you want to take it with the lowest possible intensity; you don't want to overdose, you want to control that. And you actually want to move the X-ray machine, scan it over, so it's basically a real-time control problem. In those days, the software for microprocessors was quite primitive. Most people were still programming largely in assembly language. So my thesis turned into building a programming language that was designed to do real-time control, and at a higher level. So people could say I need this certain kind of response when a particular event occurs, I need to guarantee that this task will be completed in this time.
And as I was in the final, finishing stages of my thesis, a number of legendary figures in the field decided this whole area was a critically important problem. So I was working on exactly the right area at exactly the right time. I ended up deciding to go on the academic circuit. I interviewed all over the country, at sixteen different institutions. Stanford was the very last place I interviewed and, by that time, I had come to California, seen what California's like and, of course, Stanford already had a tremendous reputation in both Electrical Engineering and Computer Science. So when I got the offer, I just decided to come.

 

RW: And so what - what did you do then?

 

JH: I started as an assistant professor in Electrical Engineering. And early on, I was teaching microprocessor lab courses, in the early days of microprocessors - a combination of hardware and software design that students would do. I then taught, for several years, oh, a variety of different things. A systems programming course, then I started teaching the compiler course for a while. I actually didn't move into the computer architecture field until a few years after I had been at Stanford. And I ended up moving in in a strange way. At that point, I was working primarily in the compiler technology area, looking at new ways to approach compiler problems, register allocation and optimization. And DARPA had just started funding the VLSI projects at universities and we had started our first courses and were beginning to get research funding. Forrest Baskett had recruited Jim Clark to come to Stanford and Jim began working on this vision that eventually became the Geometry Engine, of really building a processor that would integrate all the functions and dramatically reduce the cost of doing 3-D real-time graphics. So Jim was working on the project. Well, one of the problems with this is that you needed a fair amount of hard-wired microcode that did all these geometric transformations, took high-level commands from a general purpose processor, converted them into operations and then transmitted out the result. So there was a lot of microcode to write. It was fairly complicated. The execution engine had to be pipelined. You didn't really want people to have to think about the low-level details of the pipeline. And of course, at the end, you needed to generate a finite state machine that you could then synthesize with lots of commonly available tools for doing that.
So I ended up building the tool to do this, a system called SLIM, Stanford Language for Implementing Microcode, which basically provided a high-level language for writing finite state controllers. So I worked on that project and that got me interested in the whole area of VLSI and what was happening there and that transformation. And I then did several other things before I got back to it. I worked with the Silicon Compilers people early on, as Carver Mead was just beginning that company - John Doerr was acting president at that time - and helped them with their second design. Their first design was an Ethernet controller. But their second design was the microengine which became the MicroVAX I, and I did a lot of the internal design work on that, using what I had learned both from my VLSI experience as well as my earlier undergraduate experience on microcoded engines and how they work. So I worked on that. That got me a lot more interested in architecture and in the potential of microprocessors. So as we finished the Geometry Engine project, the Stanford crew got together and said well, what should we do next? And I said well, you know, we know a lot about compiler technology and I'll bet you that if we used that, we could take a new look at how microprocessors should be designed. Not coming at it from the assembly-language-level view, but thinking about a processor as a target for a compiler. Early on, I think one of the other things we grasped was that microcode interpretation was often an unnecessary step in the process, because what it really did was dynamically translate from one instruction set to another instruction set. Somewhat lower level, but it really did that translation. And I understood from my compiler background that a compiler could do that just as well as a set of microcode.
And by moving the function from run time to compile time, we could gain some efficiency in doing it.
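The kind of finite state controller a tool like SLIM described can be sketched in a few lines. This is a hypothetical illustration in Python of the general idea - a state-transition table driven by input events - and is not SLIM's actual syntax or output:

```python
# A toy finite-state controller in the spirit of a microcode sequencer
# (hypothetical example; not actual SLIM syntax or generated output).
# States cycle FETCH -> DECODE -> EXECUTE -> FETCH, with a halt event.

TRANSITIONS = {
    ("FETCH",   "cmd"):  "DECODE",
    ("DECODE",  "ok"):   "EXECUTE",
    ("EXECUTE", "done"): "FETCH",
    ("DECODE",  "halt"): "HALT",
}

def run(events):
    """Drive the state machine with a sequence of input events."""
    state = "FETCH"
    trace = [state]
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)  # ignore invalid events
        trace.append(state)
    return trace

print(run(["cmd", "ok", "done", "cmd", "halt"]))
# -> ['FETCH', 'DECODE', 'EXECUTE', 'FETCH', 'DECODE', 'HALT']
```

A synthesis tool would turn such a table into gates; the point of a higher-level language was to let the designer write the table, not the gates.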

 

RW: Well, you would only compile once.

 

JH: You'd only compile it once rather than dynamically translate it every single time.

 

RW: Each - each time.

 

JH: Right. And we didn't think that that was necessarily the solution for all things. There are operations like floating point, where you really do want a microcoded engine to deal with it, in order to get the compression of instruction bandwidth and because you probably have some fairly odd microinstructions that have no other purpose in the world. But in many cases, we saw operations which could be implemented almost as efficiently in a simpler instruction set, which we then hoped could be interpreted more quickly. So that led to the birth of the Stanford MIPS project. We knew of the Berkeley RISC project, but from the beginning, there was a fundamental difference between the Stanford project and the Berkeley project, which was we had a fundamentally strong compiler group. Our plan was always to be working on innovations in compiler technology at the same time we were trying to look at new computer architectures. And the Berkeley group didn't have that capability as part of its research makeup. In some sense, we were much closer in philosophy to the IBM 801 and their subsequent effort. The main difficulty was that IBM wouldn't let anybody know what was going on in the 801, or what they had discovered, or what their concepts were. I did have the opportunity to talk to John Cocke on a few occasions and we shared some ideas about everything from delayed branches to various ways to get the match between the compiler and the architecture closer together. But it was hard to get very many details about the machine because IBM was thinking about whether or not they ought to make a product of it. So we developed our own set of ideas internally, and lots of the ideas about pipeline scheduling and the load/store architecture emerged from trying to get a synergy between what the compiler could do and what the hardware could do efficiently.
And that was really the origin of most of the concepts in the Stanford MIPS design.

 

RW: Well, at some point, you took it out of an academic exercise and took it to the marketplace.

 

JH: Hmm. A lot of serendipity in this process, it turns out. So we were publishing our papers. They were well accepted in the academic community. They were largely met with rejection by the industrial community. I think partly our own fault, partly the massive amount of “not-invented-here” syndrome that exists everywhere, in every established company. The part that was our fault was we couldn't explain what the fundamental phenomenon was that was delivering improved performance in the RISC ideas versus the CISC processors of the time. So we had this academic processor that benchmarked several times faster than a 68000 or an X86. But we couldn't explain why in a very clear-cut way, and the argument that simplicity made it so - it's true that simplicity plays a role in all this and helps make the hardware faster, but that wasn't a scientific, quantitative argument. The ironic thing is it was only after the company - MIPS - had started and I read a paper by Doug Clark that was an analysis of the VAX-11/780 architecture that I realized exactly what was happening. What was happening is we were able to drive the number of clock cycles per instruction down much faster than the instruction count went up. So the instruction count went up maybe 50 percent, but the number of clock cycles per instruction went down 5, 6, 7, 8 times, and that's where the performance differential came from. But I didn't figure that out till probably two years after we were publishing papers. I think the ideas would have been accepted more quickly had we done so, although it wasn't clear to anybody whether you could achieve that same reduction in clock cycles per instruction for a CISC architecture. And the answer was, probably, in retrospect, if you look at some of the VAX attempts to do it: you couldn't do it with the same gate count.
Many years later when the gate counts were sufficiently large, I think as we've seen in the post-Pentium era, you can achieve it, but with much larger gate counts. In any event, we published our papers, people were skeptical. Forrest Baskett, by that time, had left Stanford to set up the new DEC Western Research Laboratory and was busy trying to build a RISC machine, transfer it to DEC headquarters and get them to adopt it as something they would build. That effort never quite succeeded, I think. Very hard for a large company that's well established on the East Coast to accept a technology from its West Coast small laboratory upstarts and put it in the mainstream. For much the same reason that IBM ended up canceling their first RISC projects internally. So I was working here. John Moussouris, who had worked on the 801, was on sabbatical from IBM and working in our lab and, you know, we had a number of conversations about this technology. I think we both believed that it would eventually take over. When IBM cancelled the RISC project, John Moussouris became rather disillusioned and began to think seriously about not going back, because having worked at Stanford and at IBM on this technology, he really believed in it. And he just was unsure about going back. About that same time - so we had published our papers, we were getting ready to do our next research project, do the normal academic thing - move on and pick a new research area. And Gordon Bell, one of the founders of Digital Equipment Corporation, came along and said you need to start a company, because otherwise this technology is not going to get out there. You won't get the big companies to accept it, for two reasons, he said. First, you have the “not-invented-here” syndrome. Second, it's also incredibly disruptive to their existing markets. You can produce a machine which costs one-fifth the cost of a VAX and performs adequately.
But they're selling a lot of VAXes for a lot of money. That's going to be very hard for them to deal with. So Gordon was in the process of actually setting up this unusually structured computer company that was sort of a holding company. It had a single marketing and sales group and a variety of different engineering groups that were working in different places in the computer space. And it turned out, in the end, this company was not terribly successful. But Gordon got us thinking about doing it and even talked about possibly funding it as one of the companies - I think Encore was the name of it. In any event, that never happened, but Gordon got us seriously thinking about it. So I started a conversation with John Moussouris and called up a guy who had graduated from Stanford several years earlier, Skip Stritter, who had been at Motorola and then at Nestar - at Motorola, he had worked on the architecture for the 68000 - and said you know, we're thinking about this idea, what should we do with it?

So we then went to talk to a few people, and they encouraged us. And then we went to talk to VCs with a business plan that I can now only laugh about. Our business plan had no market projections in it, no sales goals. All it had was here's what we want to design, here's how long we think it's going to take, here's how much money it's going to cost and here's how much better it's going to be than anything else out there. And we're a group of three engineers; you should give us some money and then help us go recruit an executive team to build this company. Well, that made a lot of venture capitalists very nervous. They were also nervous over the fact that the plan of record was for me to take about 15 months off and then come back to Stanford - to take a year off, but really come back and pursue my academic career. And I just felt I needed to be completely above board with everybody because that was my goal. I did want to do this, I wanted to take this time, but I really wanted to be back in the university long term. But we did get Mayfield Fund to buy into it. They bought into it and then helped us build, starting with an interim CEO, a long-term team. And that was the beginning of MIPS as a company.
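The clock-cycles-per-instruction argument Hennessy describes earlier in this answer can be made concrete with a small worked example. The figures below are illustrative stand-ins, not measurements from the period:

```python
# Illustrative CPI-vs-instruction-count comparison (hypothetical numbers).
# Execution time = instruction_count * CPI * clock_period; assuming the
# same clock period for both machines, the ratio of total cycles gives
# the speedup.

cisc_instructions = 1_000_000   # baseline dynamic instruction count
cisc_cpi = 8.0                  # clock cycles per instruction (CISC-era)

risc_instructions = cisc_instructions * 1.5  # instruction count up ~50%
risc_cpi = 1.5                               # CPI driven down ~5x

cisc_cycles = cisc_instructions * cisc_cpi
risc_cycles = risc_instructions * risc_cpi

speedup = cisc_cycles / risc_cycles
print(f"RISC speedup: {speedup:.2f}x")
```

With these numbers the instruction count rises 1.5x but CPI falls over 5x, so the RISC machine comes out roughly 3.6x faster - the shape of the differential Hennessy says he only fully understood after reading Clark's VAX-11/780 analysis.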

 

RW: Well, one advantage you have here is that you could walk to the venture capital community.

 

JH: You can.

 

RW: Or ride your bike.

 

JH: You can. Skip had known the Mayfield people and I had known them because they had funded Silicon Graphics. So I'd known them through that connection and I could go out there. And we pitched to Kleiner Perkins, to Mayfield, to Sutter Hill, to Technology Ventures, Inc. And Mayfield were the only ones who were really ready to do it, although several of those firms came in subsequently, at later rounds in the funding. But it was an interesting experience. You know, the company was certainly successful, but what I didn't know about starting a company, what I didn't know about running companies, what I didn't know about the basics of how these things are financed and what they needed to achieve would fill a book. It would fill a book, Rob. And the unfortunate thing is, had I had what, for example, Stanford students can get today in some of our courses on entrepreneurship, I could've probably saved the company six months to a year in terms of its development cycle and probably 20 million dollars. And that's a lesson to be learned. So today, when I talk to young students and faculty who are interested in entrepreneurship, I try to help them not have to learn that lesson the way we had to learn it, which was the hard way. But it was a wonderful opportunity. I learned more in that 15 months than I probably learned in any 15 months of my life. It's so intense and you're exposed to so many different things, and in a small company, you find yourself using all your skills. Leadership skills, your ability to engage people. I was the first technical salesperson, so I went out with a person we had hired, Steve Blank, who we'd hired as an acting Vice-President of Marketing and Sales.
I went out and made cold calls on customers, which, if you want to have some training that really will give you the greatest respect for salespeople, that's what you should do sometime.

 

RW: Well, that's - Bob Noyce used to do that.

 

JH: Yeah. So I did that. Because in the early days, for a young company, it's very much a technical sell. You don't have your product yet. You've got to convince people that you have a vision that's going to change how computing happens. We were lucky to go see Prime Computer at a real turning point, where they realized if they didn't do something dramatic, they were going to be out of the computer business, and we offered them an opportunity to launch into the workstation business with new technology that would not just bring them in as a player, but would bring them in as a leader. And that was really key and they gave us an MOU, you know, less than six months after the company started, with a one-year design cycle for a microprocessor. That's when you could design microprocessors in a year. We signed it probably in November or December and our target was to deliver before December 31st of the next year. So we got our chips back and, of course, you have this problem of semiconductor line shutdowns around Christmastime and all these things. We'd just got our chips back, and the first silicon worked well enough for engineering samples. We got it in the board, everything worked and we put one of the guys on a plane on Christmas Day to deliver the board back to Prime. He showed up the day after Christmas, he goes to Prime Computer and they're closed. But the fact that he was there on that day enabled us to make our benchmark and our milestone.

 

RW: Well, you know, you - you brought up silicon compilers and Carver Mead and VLSI courses that were taught; yet those circuits never really were production ready.

 

JH: Yeah. You know, one of the things I quickly found out, for example: we never used any of the Stanford MIPS design for anything - we basically redesigned it. The first thing we did when we started to build the company was to go out and hire people who had actually done designs. We hired a number of great people from Intel early on, who had worked on various X86 processors and came in because they were interested in trying to do something new. And we were right in that transition between NMOS and CMOS. The Stanford MIPS design had been in NMOS, but it was clear CMOS was the future. So there was no reason to keep that design around. In fact, we had learned enough things, even about the architecture, that we could improve the architecture right away. So really what we ended up transferring was the compiler technology; the compiler technology that was built at the company, boy, really started with the Stanford compiler technology. And a lot of what was in our heads, which was as much experience - one of the things I say to people thinking about companies and spinning out things from the university is it's often the lessons you've learned that are as important as a particular technology. You know what works and what doesn't work. For example, a lot of people said well, you know, you have this university prototype here and it barely works as it is. It did work, but it barely worked. Doesn't have virtual memory, doesn't have floating point, all these kinds of things, and when you put those in, all your advantages will go away. Well, we had thought through those problems well enough to understand how to solve them and could see how to get from here to there. And that insight is just as important as what you've already done in many of these cases.

 

RW: Well, in the history of computers, as you alluded to on the VAX, there - there've been the established players, who have not wanted to obsolete their machines because they were making money. And so they stayed with an old architecture and old technology and eventually that brought them down. And so we all thought that was going to happen to Intel. I certainly did. I sold my Intel stock. I said Intel is a one product company.

 

JH: It is a one product company.

 

RW: And it's going to go just like Data General did and - and Prime and all those other - eventually DEC. Yet that didn't happen. Why? Why didn't that happen?

 

JH: I think for a variety of reasons. First, I think, in some sense, Moore's Law really did provide them with a constant opportunity to enhance things. That was the first. Second, we were in a time when computer power was growing for everybody; even if there was a gap between what Intel could achieve with their architecture and what the RISC guys could achieve, everybody's computer power was growing significantly, and I think that was a key factor. Third, the growth of the commercial market - and by that I include the Web, general processing, all these PC market things - as opposed to the scientific market. The much more rapid growth of that commercial market, particularly in the post-Web, post-Internet era, really meant that the focus on engineering and scientific applications on the workstation side of the market was not where the big growth opportunity turned out to be. I think that affected it. And I think, eventually, what happened was you got enough gates available on a single chip that the incremental cost of implementing an X-86 became small, basically by layering an engine on top of a RISC engine, which is how, since the Pentium Pro, all the X-86 architectures are done. The internal machine is a RISC engine and you then layer an instruction cracker on top of it that's very efficient. It actually is a concept that was pioneered in the VAX 8500, which was probably one of the more successful VAX implementations. But there is a real difference here: the VAX is an architecture which is much harder to crack than the Intel architecture. The Intel architecture is odd and unusual; its complexity comes from its lack of uniformity, not from the fact that it necessarily has heavily used instructions that are difficult. It does have some difficult instructions, but they're almost never used. So you design something that's basically a RISC-like implementation.
You interpret those in microcode slowly - it doesn't matter because they're never used - and you crack the rapid instructions quickly. I think Intel's execution, on both the technical side, in engineering, and in marketing, has been absolutely superb. Absolutely superb. I mean, they've stumbled from time to time, but they've never had marketing and engineering stumble at the same time. There've been times when engineering's behind - well, look at the 486. I mean, they were very late on the 486. But the marketing engine really held people in there and kept them going. So I think they've done an extraordinary job. They've taken advantage of the fact that they've also become a world-class manufacturer. I think Barrett did a great job in making them a world-class manufacturer. That provided a real advantage. And the established marketplace and the volume is such an enormous advantage - Nick Tredennick told me this early on. He said it won't matter if you're better. If you can't match their volume, you'll never get the prices. And I think early on, that wasn't a gigantic handicap, but as design costs really soared and we began talking hundreds of millions of dollars for a microprocessor development, then volume really did matter. Because if you've spent a hundred million dollars and you sold a million parts, you've got a hundred-dollar tax per part in design cost. If you sold a hundred million parts, your tax is only a dollar a part. So that's an enormous difference in the whole equation.
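The design-cost arithmetic Hennessy cites is straightforward amortization of a fixed development cost over unit volume; a minimal sketch:

```python
# Amortizing a fixed design (NRE) cost across every part sold,
# using the figures from the interview.

def design_tax_per_part(design_cost_dollars: float, parts_sold: int) -> float:
    """Design cost spread evenly over each part sold."""
    return design_cost_dollars / parts_sold

# $100M development cost at two different volumes:
low_volume = design_tax_per_part(100_000_000, 1_000_000)      # 1M parts
high_volume = design_tax_per_part(100_000_000, 100_000_000)   # 100M parts

print(f"${low_volume:.0f}/part at 1M units, ${high_volume:.0f}/part at 100M units")
```

A hundredfold difference in volume is a hundredfold difference in the per-part design tax, which is the whole-equation effect he describes.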

 

RW: Well, Intel had tried to obsolete the X-86 three times.

 

JH: Right. Unsuccessfully.

 

RW: The 432 - so could you do an analysis of the 432, the i860 and the Itanium?

 

JH: Sure. I think in the case of the 432, they were following the same strategy that had led to the confusion, I think, in the VAX architecture. Namely: what we're going to do is build the hardware up to a higher level to simplify the software. Well, the truth is it is a whole lot easier to write software than it is to build hardware. A whole lot easier. And of course, it's more flexible, evolves better, all these kinds of things. So while they would deny that the 432 and the VAX were on the same branch of the tree, I think very much they were. They were both influenced by the same line of thought that had gone back to the old SYMBOL machine, actually, and that concept.

 

RW: That was Fairchild.

 

JH: Yeah. At Fairchild. Eliminating compilers completely. We'll get rid of that software because it's so hard to write. And that's what all hardware people think - software is so difficult, because in every hardware project, it's the last thing to come along and difficult to make happen. I think in the i860, Intel built a very interesting machine. Had they simply thought about it as a more general purpose machine, rather than focusing more narrowly on the graphics market and a few other markets, and had they brought the compiler people in from day one, it probably could've worked. Because it wasn't that different from some of the other concepts that were around - it was more narrowly targeted at floating point. And that probably wouldn't have gotten the breadth of application. The IA-64 architecture and the Itanium are a more difficult problem. I think that architecture suffers from probably two problems. One, the incredible success of the Pentium implementations, P6 and on. Secondly, it's got everything in it. If there are multiple ways to do something - so there are multiple ways to compile a segment of code - both techniques are supported in the hardware. And the result is that it's hard to write a compiler for it, not because there isn't hardware support for various kinds of clever compiler tricks - there is. There's too much support. There are too many different ways to do things. I remember when they first started running benchmarks and I said wow, how do you decide how to compile this? Particularly, how do you decide to compile integer programs, which are very difficult to compile for some of these more advanced techniques, as opposed to floating point programs, which tend to be a little more regular. They said well, we try all the combinations of the compiler flags and see what works best. And that's not a workable strategy when you go to an OEM market.
I think also they probably focused it too much on floating point and the VLIW ideas, which they really built on. We know those work for floating point programs. They work much less well for integer programs, because there's less predictability in the code stream. And they got caught by the same dilemma that everybody else got caught by: the engineering workstation market was growing five to ten percent a year, while the server market and the commercial side were growing thirty percent a year. And guess where all the market opportunities were. So I think all those things served to hurt it. And, you know, history has yet to write the final chapter in the book on IA-64 and the Itanium, but I'll bet you that a 64-bit version of the X86 architecture turns out to be much more successful in the long term.

 

RW: It's interesting because the - the latest Pentiums have legacies of the 8008 in them.

 

JH: Right.

 

RW: The addressing modes, with little-endian, big-endian.

 

JH: Yeah. Right.

 

RW: That was because Datapoint had a serial input on that.

 

JH: Right.

 

RW: And so it - it's really amazing. And - and the…

 

JH: And only - had Moore's Law not been there, there's no way they could've done this.

 

RW: Yeah.

 

JH: Because there's no way they could've carried all that baggage except for the fact that the incredible exponential growth in density made the baggage cheaper and cheaper and cheaper each year.

 

RW: Well, you - you talked about volume and IBM has captured all three of the game computers.

 

JH: Right.

 

RW: And unfortunately, they're all different.

 

JH: Right.

 

RW: But the Cell processor - IBM has just made its first military sale. They have a hundred engineers on it; they're building a motherboard. They have made a sale to an outfit that I never heard of, Mercury Computer.

 

JH: Ah. Yeah. A military computer supplier.

 

RW: And they're going to use it for radar, oil exploration and medical imaging, and they're in talks with Raytheon and Boeing. So with the Cell processor, you're going to have very high volume and it's exceedingly powerful on imaging. So this may be a new architecture.

 

JH: Well, we may be in a time when special purpose machines, or even machines designed primarily for the scientific market, may see a rebirth, because that marketplace is now so small compared to the general purpose marketplace that the X-86 dominates. So small that they can't get any consideration of their design problems into the design of the X-86. It's just not going to happen; it's just not realistic that they'll do that. And they have some fairly unique and different problems. They have applications which often have lots of vector parallelism. They have different memory behaviors, so that caches don't work as well as they do for more traditional programs. Our challenge is going to be - I think right now there is a fundamental problem that, if somebody could solve it, could change the entire way in which, let's say, at least high performance embedded machines work, whether they're in radar control or scientific applications or graphics. And that's to build a processor which operates almost as efficiently as an application specific design for, let's say, graphics or radar or any of these other applications - almost, within a factor of two is probably close enough - and yet is more general purpose and more programmable, and programmable in a high level language, not a lot of grungy getting in and writing libraries in assembly language first so that you can make the thing useable. If we could achieve that, that would be a real breakthrough. That would be a unification of these various, disparate markets, which otherwise always have the problem that a special purpose design targeted for one market just has such a hard time keeping up with the general purpose designs, or even the semi-general ones, the signal-processing DSPs of the world. 
But we haven't quite gotten there yet, and I think there are a number of very hard technical problems having to do with memory models. In some sense, the only computational model we have that kind of works and is reasonably high level is the vector model. But it's not quite right. And so there's a lot of interesting thought, I think, that's going into a lot of these things right now. You know, we're also at an interesting time in computer architecture, as even Intel will say now. We are probably at the end of the road for - or at least, the slowing down of - the rate of performance growth of a single processor. So first we go to these threading ideas - simultaneous multithreading, hyperthreading, whatever you want to call it. And the next step will really be to go to multiple processors on a chip.

 

RW: Right.

 

JH: Once you start doing that, the obvious question you ask is: well, should I put on eight somewhat simpler but very efficient processors, each of which will be somewhat slower than a behemoth? Or should I put on two to four behemoths, which will consume more power and be less efficient, but where each individual processor is faster? And that is completely unknown territory in terms of how you design them, how you program them, what efficiencies you can get.

 

RW: But don't you think it would be ironic if you started your career with a tic-tac-toe player and it would end up that game computers turned out to be the most powerful?

 

JH: Oh, I think it is ironic. I think it is ironic. In many ways, this has happened. I mean, if you think about the performance of various things that are application specific, from game computers to what's in your cell phone - I mean, just look at the number of operations that go on in an 802.11 wireless chip that you can buy for ten dollars. It makes an Intel processor look slow. Now of course, they're highly tailored to that particular operation. Whether or not we can close the gap and achieve some of the benefits of being more special purpose while retaining the programming benefits of being more general purpose, that's a big, open, unknown problem, I think, right now. And I've encouraged lots of young people to try to work on it.

 

RW: Okay, well, let's go back. How did you become president of Stanford?

 

JH: How did I become president? That's an interesting question. I tried to avoid administrative tasks as long as possible, other than running the Computer Systems Lab, which was - well, not a small effort. There were fifteen or twenty faculty in it. But it was one where I was really a leading faculty member in my area. I knew what they were doing, and it was relatively modest - something I could probably do in 25 or 30 percent of my time. And then, at one point, I was asked to be chair of Computer Science, and having passed on doing it once, I decided it probably was time to do my civic duty to my colleagues. So I took on that job, and that was just as we were planning and building the Gates Building, the new home of the Computer Science department. And I enjoyed doing it and found it challenging, and I found I could continue my own research and technical work. So when Jim Gibbons stepped down as Dean of Engineering, the search committee came and said, well, would you consider taking this job? And I said, well, let me think about it, because the step from department chair to dean means that - you know, when you're a department chair, you probably still have more than half your time, or at least half your time, to do your own research and teaching. When you become dean, that number is going to drop significantly. So in the end, I concluded that the Computer Science department was in very good shape, and that the problems we faced in Computer Science or Electrical Engineering - those are the two departments I had in my appointment and knew well - were not problems local to those departments. They were bigger issues at the school and higher levels: everything from the cost of living and housing issues for our faculty, to competitive compensation, to thinking about start-up packages for young faculty, all those kinds of things. So I took the dean's job and I really enjoyed it. 
I think one of the challenges for a faculty member is that you're so used to receiving recognition for your own contributions, the ones you make with your own team of graduate students and your colleagues. When you go into an administrative role, you have to find joy in seeing your colleagues be successful: in seeing a faculty member get elected to the National Academy or win a major award, in recruiting the best young superstar, in seeing any graduate student in the school be incredibly successful - win a Rhodes or a Truman scholarship. And I found that I really enjoyed that job. So after Condi Rice stepped down as Provost, Gerhard Casper, who was then President, was doing a search and called me up one day and said, could you come up to Hoover House, the President's house, and meet with me? And he asked me if I'd consider doing the Provost job. I had to think long and hard about that, because that's a big jump. Engineering - while I certainly was not an expert by any stretch of the imagination in what my colleagues were doing across the school - had two advantages. I basically could know every single faculty member; it's 200, 220 faculty. You can at least know everybody, and you can have some idea of what they're working on. You go to be Provost, and all of a sudden you've got historians and art history and the medical school and all these disciplines. It's much harder to have that understanding. But I actually was inspired - it happened that Gerhard asked me to do this right before our Founder's Day celebrations, which is an event we have every year that commemorates the Stanfords as the founders of the university. And that year, Condi Rice had been asked to be the speaker, since she was leaving us as Provost. 
And I went and listened to her talk, which was all about how education had transformed her family - how her grandfather, who was a black sharecropper, had been transformed by the opportunity to go to school - and how she saw education as the vehicle for transforming the country and providing opportunity. And I just thought, you know, that's absolutely right, and I should take this chance and take the job. So I took the Provost job. Shortly after I took it, Gerhard Casper decided that the next year would be his last as President, which I think surprised everybody, because things were going very well. We had just published the report of the Commission on Undergraduate Education and we were implementing all these new undergraduate reforms. But as Gerhard pointed out, by the end of that year he would have served in administrative roles for 20 years, between Dean and Provost at Chicago and President of Stanford. And having now been in administrative roles for 10 years, I can understand why he felt that way. So the search committee then began doing its work, working through the process, and eventually asked me if I would consider taking the job. So my wife and I talked about it and decided: did we want to take this risk and make what, besides getting married and having children, was probably the biggest change ever in our lives? Because it really does take over your life in a way that no other job I've had did. But, you know, I believe in the work that Stanford is trying to do. I believe in its role in making a contribution and a positive difference and a change in the world. And so I decided that we'd try it, and I said yes when they asked, and that's how I became President.

 

RW: Is Stanford the most entrepreneurial university in the world?

 

JH: Well, it's certainly the most entrepreneurial large university that I'm familiar with. Where that comes from, I think, is partly the Western roots, that pioneering spirit going all the way back to the Stanfords and being in California. It's partly what's grown up around us, that people see opportunities. As you mentioned earlier, having the venture capitalists out here on Sand Hill Road, all the pieces are in place. When we think about what makes this a unique institution, I think it's being a pioneering institution, not just in terms of technology transfer. For example, when we put the undergraduate reforms in place, when Gerhard Casper was President, we were really leading everybody else in terms of doing things that were rather different and bold, rethinking the role of undergraduates in a great research university. And now, with Bio-X, for example, we're looking at a whole new model of how to organize departments and research activities that are on the boundaries between traditional areas. That pioneering spirit and that willingness to try new things I regard as a fundamental strength that makes us a rather unique institution, and one we have to take advantage of.

 

RW: What percentage of the students here are foreign born?

 

JH: A relatively small number of our undergraduates - probably five percent or so of our undergraduate population. In the graduate population, overall, about a third, but of course it's not evenly distributed across…

 

RW: How about in engineering?

 

JH: In engineering, probably more in the mid-forties, with the numbers being close to fifty percent in the PhD programs and less than that in the master's programs.

 

RW: At Cal, Paul Gray told me they have a quota. They can't have more than one-third non-citizens in the graduate programs, by some sort of a law or something like that. What's happening to native-born Americans, that they're not entering Computer Science and Electrical Engineering at the numbers they were?

 

JH: I think there are several things in play here. I think we're probably not doing an adequate job in K-12 of preparing young people to be in engineering and the sciences, and it really comes down to math and science training. And admittedly, it is more difficult now than it was when you or I graduated. When you and I graduated, you could come into an engineering major without calculus, without anything past the very basics of physics. You could probably come in with a physics course, but a pre-calculus physics course. And you could get by. It wasn't ideal, but you could get by, no problem. Today, if you don't have some exposure to calculus and to a calculus-based physics course, it's very difficult to finish an engineering degree in four years. And remember, these are already among the most difficult degrees in the university - the most difficult in terms of course requirements, the rigor of the courses, all those sorts of things. So I think our lack of adequate preparation plays a role, and that goes all the way back to the fact that we need to really improve what's happening in K-12 by getting talented people in there as teachers. You need somebody who loves mathematics and science teaching young people mathematics and science. And the very best scientists and mathematicians are not drawn to those careers, for reasons ranging from compensation to the difficulty of the work and the lack of incentives and rewards for individual contributions. So I think that plays a role. I think there's another difficulty, which is a real conundrum, because I don't have a good solution for it: lots of young people see that careers in law or business are more highly rewarded than careers in engineering. 
How this will play out over the long term is hard to say, because it's wonderful to say, I aim to be the CEO one day of Intel or General Electric or Microsoft or Google, and I'm going to do that by getting a business degree and pursuing that route. But of course, if we don't have engineers in those companies, you're not going to be the CEO of very much at all. That's a very difficult dilemma and one that I think we're going to have to face up to. And there isn't a simple solution, because as the number of engineers continues to grow in Asia - in India and China, in particular - I think it's going to be very difficult to raise compensation levels for engineers and scientists in this country. And I don't see a simple way out of that problem. But I do believe that if the day ever comes that all of our engineering activities are abroad, including our new product development, because that's where the engineers are - well, you may believe that it's still a U.S. company, headquartered here, but if everything, including the basic R&D, is going on abroad, it's going to be a very different kind of situation.

 

RW: Well, thank you very much.

 

JH: Thank you.