ICBTT2002: The 1st International Conference on Business & Technology Transfer (Technology & Society Division, JSME)

Technology Transfer and the British Microelectronics Industry, 1950-75: Confused Signals[1]

John F WILSON

By the 1990s, the global microelectronics industry was composed of a myriad of syndicates, alliances, partnerships, exchange agreements (technical and market-related) and joint ventures. It was an industry in which technological interdependence had become established practice, with the flow of ideas and knowledge becoming multi-directional, taking in the USA, Western Europe and Japan. At the same time, it is noticeable that British-owned firms had been marginalized in this system, having been eclipsed by the much more powerful American and Japanese multinationals that came to dominate microelectronics. Focusing on the British technological and commercial leader, Ferranti, this paper will assess the extent to which American technology was purchased and the impact this had on the industry's performance. It will become clear that in spite of a considerable outlay on licences British firms benefited only marginally from access to the world's best technology. One of the reasons behind this was the direction of government policy in this field, given its emphasis on technology leadership, rather than followership. Following the work of Porter and of von Tunzelman, this paper is concerned with how business and technology transfer can be best fitted to the values and resources of firms.

Key Words: microelectronics, Ferranti, technology transfer, government policy


By the 1990s, the global microelectronics industry was composed of a myriad of syndicates, alliances, partnerships, exchange agreements (technical and market-related) and joint ventures.[2] It was an industry in which technological interdependence had become established practice, with the flow of ideas and knowledge becoming multi-directional, taking in the USA, Western Europe and Japan. At the same time, it is noticeable that British-owned firms had been marginalized in this system, having been eclipsed by the much more powerful American and Japanese multinationals that came to dominate microelectronics. This had happened in spite of a plethora of government-funded schemes stretching back to the early-1950s, not to mention the considerable amount of money invested by British firms in licence fees, consultancy services and even acquiring American microelectronics operations. It was plain by the late-1960s that Britain was losing what contemporaries referred to as the 'Electronics War',[3] especially once major corporations like Texas Instruments, Motorola and Fairchild started to target European markets. Even though most accepted the central importance of microelectronics, given the impact these new components were having on most aspects of work and life from the 1950s, a synchronised strategy never materialised, reflecting muddled, and at times over-optimistic, thinking on the part of both political planners and strategic management.
This paper will offer an explanation of why the British microelectronics industry failed to cope with this competition, linking a discussion of technology transfer issues with the pattern of both government funding and corporate strategies. The first section will outline the technological context, explaining how alongside aerospace, automobiles and pharmaceuticals, from the 1940s microelectronics represented the cutting edge of technological progress. Indeed, it was regarded as so central to an industrial economy's competitiveness that governments in the USA, Western Europe and the Far East committed considerable resources to the development of new generations of components. For those firms heavily engaged in electronics, whether as equipment or component manufacturers, it became increasingly essential to spend profusely on both R&D and production activities, tying them inexorably to global trends in an industry that was rapidly becoming central to economic progress. After briefly assessing these global trends, in section two we shall introduce the strategic choices available to British firms. In this context, Porter's work will provide the analytical framework required, focusing especially on whether firms should have pursued either a low-cost producer strategy (a supply-side choice) or product differentiation (linked to demand-side issues). In addition, one might also add the distinction between technology leadership and technology followership that Porter makes, in that it is not always the case that first-mover advantages can provide the kind of lead required in such a fast-moving industry.[4] Thirdly, we shall assess how the resource-based view of the firm can shed light on this analysis, linking up with the work of von Tunzelman on core competencies and the choices facing, and made by, British management.[5]
While the analysis will be focused on the British electronics industry as a whole in the third section, in section 4 we shall examine a case-study of Europe's leading microelectronics firm, Ferranti.[6] By assessing how this firm coped with the various dimensions of the microelectronics industry, whether political, financial or market-related, it will become clear that public policy and technology transfer issues drove firms down the technology leadership track. In this context, then, the transfer of important new semiconductor advances from the USA to Britain became clouded in the varying aims and objectives of corporate planners and politicians, creating insuperable obstacles to the development of a competitive microelectronics industry. There were also fundamental inconsistencies in the government policies adopted, while the early-1970s hiatus in public funding precipitated a major switch in strategy at Britain's leading firm. Above all, it is necessary to question whether Britain should have attempted to pursue a technology leadership strategy, given the enormous lead gained by American firms and the substantial Japanese microelectronics programmes of the 1960s and 1970s. Furthermore, any simplistic analysis that places all the blame for Britain's failings in this sector on market weaknesses need only consider how in the 1970s Japanese suppliers overtook the American first-movers. While for British firms it was clearly essential to acquire the technological competencies developed in the USA, in retrospect one must question the use to which these resources were put at the level of the firm.
By linking the case-study of Ferranti into more general developments, it will consequently become clear that for a variety of reasons technology transfer rarely worked in the interests of the British microelectronics industry. This raises the question of whether so much public and private money should have been poured into this sector, given the availability of cheap and reliable components from American, and later Japanese, suppliers. The paper also assesses the political dimension to technology transfer, because for those British electronics firms involved in buying patents and licences from the USA, public money frequently acted as the principal catalyst. Of course, other factors like the relatively small and conservative nature of the British market undoubtedly played a major role in undermining competitiveness, emphasising the complexities involved in analysing the evolving scenario. Above all, though, it is plain that the combination of corporate strategy, technology transfer and governmental priorities failed to create an environment sufficiently conducive to the emergence of a viable British microelectronics industry.

1. The Emergence of Microelectronics.

Although a large number of authorities have related and analysed the emergence of microelectronics,[7] it is important to trace the key stages as a means of providing the essential technological context. The central determinant in this story was the 'tyranny of numbers', in that by the 1940s engineers were increasingly constrained by the acute problems associated with using ever-larger numbers of thermionic emission valves. The heat generated by these devices, as well as their fragility, limited the power and size of electronic equipment, forcing engineers to consider the semiconductor as an alternative switching and control mechanism. Semiconductor research had actually been conducted on both sides of the Atlantic during the 1930s: A. H. Wilson at Cambridge University produced some crucial insights into the quantum theory of these unusual devices, while William Shockley of Bell Labs (owned by AT&T) was experimenting on what he called a field-effect transistor that might be used in his parent firm's telecommunications equipment. Although two British firms, GEC and British Thomson-Houston, conducted extensive field trials using Wilson's theoretical concepts, it was not until Shockley was assisted by John Bardeen and Walter Brattain that a point-contact transistor was announced to a disbelieving world in 1948. Once further refinements had been made to this device later that year and Shockley produced the junction (or bipolar) transistor, one can confidently claim that the microelectronics era had started. As the first devices were made out of germanium, however, it is vital to stress that the era stuttered into existence, because the early transistors were slow and unreliable. Only once the silicon transistor had been developed by Texas Instruments in 1954 did reliability and speed improve to such an extent that equipment designers were willing to use them in significant numbers. Thereafter, though, the adoption of semiconductor technology gathered momentum, forcing electronics firms and governments across the industrialised world to think more extensively about how best to imitate the American achievements.
While the crucial technological developments had clearly taken place in the USA, it is vital to remember that a considerable amount of semiconductor research was being undertaken in the UK. Apart from Wilson's pioneering theoretical work at Cambridge, by the early-1950s all of the leading electrical firms had initiated investigations into the Bell Labs products.[8] Indeed, it is interesting to see how once Bell Labs started to make its patents available to rival firms, after an international symposium held in 1952, British firms like English Electric, AEI, Lucas, GEC, Pye, STC and Ferranti all availed themselves of the licensing arrangements initiated by the world's leading semiconductor firm. Over the lifetime of Shockley's junction transistor patent (1948-64), Bell Labs earned £1.1 million in licence fees from British firms alone, indicating how the leading players were keen to follow the US lead.[9]

In highlighting the extensive licensing of Shockley's work, it is interesting to note that AT&T were motivated by two key stimuli. In the first place, as government regulators were investigating the firm's pricing policies, management decided that a policy of greater openness was required.[10] Secondly, as one AT&T vice-president stated: 'We realised that if this thing [the transistor] was as big as we thought, we couldn't keep it to ourselves and we couldn't make all the technical contributions. It was in our interest to spread it around'.[11] Clearly, AT&T management were hoping that other firms and scientific establishments would extend the utility of their achievements, thereby further adding to both the fount of knowledge and expertise in applying such an immature technology. The dispersion policy was further assisted by Shockley's departure from Bell Labs in 1953, because by 1957 eight of the team that he recruited had in turn left his firm to found Fairchild Semiconductor.[12] This became a common feature of the semiconductor industry as it evolved from the 1950s, because a multitude of spin-off firms was created as individual engineers or small teams set out on their own to establish off-shoots.[13] While this is an issue to which we shall return in a later section, it is clear that this was one of the crucial reasons why 'Silicon Valley' emerged as the dynamic force behind US semiconductor developments, given the achievement of significant externalities arising from the concentration of expertise and resources around Palo Alto and Stanford University.
The advent of commercially-viable semiconductor technology was clearly beginning to have a major impact on the American and British electronics industries by the early-1950s, prompting even further research into this area. In this context, it is vital to note that the key breakthroughs were process innovations, given the acute difficulties associated with material purification, crystal growth and doping techniques.[14] Once again, though, in spite of the extensive British acquisition of American technology during the 1950s, American firms led the field in developing radical new production processes. The first of these was announced by Bell Labs in 1956, when at another international semiconductor symposium engineers demonstrated the diffusion process. Another crucial breakthrough was the planar process, developed by Fairchild in 1958, opening up the possibility of creating an entire circuit on a single piece of semiconductor. Once further refinements like the epitaxial process (announced by Bell Labs in 1960) became available, considerable progress had been made towards producing the first integrated circuits (ICs).
The first of these revolutionary devices was announced independently by Texas Instruments and Fairchild in 1961, precipitating the rapid drive to place ever-larger numbers of devices on a single 'chip' of semiconductor. This process has come to be known as Moore's Law, because according to Gordon Moore of Intel the number of active elements per chip has approximately doubled each year since 1961, leading in stages to medium-scale integration (between 100 and 999 elements) and ultimately to very large-scale integration (or VLSI, with up to 100,000 elements).[15] Finally, when in 1971 Intel introduced the microprocessor, it was possible to place all of a computer's central processing functions on a single chip, indicating how in the space of just over twenty years microelectronics technology had revolutionised the nature of electronic componentry and circuit design. Simplifying this rapid process of technological change, commentators normally talk in terms of 'generations'. These stages (and the periods when they first appeared) were:

First Generation: thermionic emission valves (1890s).
Second Generation: transistors (1940s).
Third Generation: integrated circuits (1950s).
Fourth Generation: medium to very-large-scale integration (1960s).

While some have questioned whether, in principle, there was much difference between the third and fourth generations, this staging process provides a clear idea of the key phases in the emergence of microelectronics. Above all, it emphasises how firms were obliged to maintain constant vigilance over rivals' activities, given the frequency of both product and process innovations.[16] The compounding implied by Moore's Law is illustrated in the sketch below.
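To make the scale of that compounding concrete, the following sketch (a hypothetical illustration, not a calculation from the paper) projects the 'doubling each year' rule of thumb from an assumed baseline of one active element per chip in 1961, and reports when the integration thresholds quoted above would be crossed.

```python
# Illustrative sketch of Moore's Law as described in the text: active
# elements per chip roughly double each year from 1961. The baseline of
# one element in 1961 and the strict annual doubling are assumptions
# made for illustration only.

BASE_YEAR = 1961
BASE_ELEMENTS = 1  # assumed: the first ICs carried a single active element

def elements_per_chip(year: int) -> int:
    """Project active elements per chip under strict annual doubling."""
    return BASE_ELEMENTS * 2 ** (year - BASE_YEAR)

for year in range(1961, 1981, 2):
    n = elements_per_chip(year)
    if n >= 100_000:
        scale = "very large-scale integration (VLSI)"
    elif n >= 1_000:
        scale = "large-scale integration"
    elif n >= 100:
        scale = "medium-scale integration (100-999 elements)"
    else:
        scale = "small-scale integration"
    print(f"{year}: ~{n:,} elements per chip -> {scale}")
```

On these assumptions the 100-element MSI threshold is passed around 1968 and the 100,000-element VLSI mark around 1978, broadly consistent with the generational timetable given above.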
While there are many other detailed innovations that featured in this process,[17] it is clear that in overcoming the 'tyranny of numbers' imposed by the thermionic emission valve and providing equipment designers with much smaller, more reliable circuits, the microelectronics industry had become an essential feature of a modern economy. This will become even more apparent when in the next section we consider the British response to these trends. Before discussing this response, however, it is important to understand the dynamic behind the USA's dominance of microelectronics technologies. The central factor in this scenario, of course, was an unrivalled market stimulus, because as late as 1968 the USA accounted for 62% of the global semiconductor market and 80% of world IC sales.[18] More importantly, the initial stimulus to develop IC technology came from the American space and defence agencies, specifically NASA and the USAF. While the initial breakthrough at Bell Labs had been prompted by the need for more efficient components to be fitted to its parent corporation's telecommunications equipment, it was public sector funding that prompted the decisive process innovations of the late-1950s. For example, the ICs used in the early Apollo space rockets and the Minuteman guided missile provided not only the initial development funds, but also the first substantial high-priced market that enabled Texas Instruments and Fairchild to mass-produce these devices from 1961. The key here was how the space and defence agencies were willing to pay pump-priming prices at the beginning of the product life-cycle, allowing an early entrant to harness dynamic economies of scale once production began. And government support continued throughout the 1960s and 1970s, with Texas Instruments alone receiving £96 million to develop ICs. Moreover, with the price of ICs falling throughout the 1960s, telecommunications, computer and industrial equipment designers started to incorporate these devices into their products. Consequently, while in 1962 military and aerospace customers accounted for 100% of US IC sales, by 1980 this had fallen to just 8%.[19]
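The economic logic of this pump-priming can be made explicit with a standard experience-curve model, in which unit cost falls by a fixed fraction each time cumulative output doubles. The sketch below is purely illustrative: the 72% learning rate and the starting cost index are assumptions, not figures drawn from the paper.

```python
import math

# Experience-curve sketch of 'dynamic economies of scale': unit cost falls
# to a fixed fraction of its previous level each time cumulative output
# doubles. Both parameters below are illustrative assumptions.
FIRST_UNIT_COST = 100.0   # assumed cost index of the first production IC
LEARNING_RATE = 0.72      # assumed: cost falls to 72% with each doubling

def unit_cost(cumulative_units: int) -> float:
    """Wright's learning curve: cost = first_cost * units ** log2(rate)."""
    return FIRST_UNIT_COST * cumulative_units ** math.log2(LEARNING_RATE)

# A first mover whose military and space orders drive cumulative volume to
# a million units ends up with a unit cost roughly two orders of magnitude
# below that of a rival only now starting down the curve.
for units in (1, 1_000, 1_000_000):
    print(f"cumulative output {units:>9,}: unit cost index {unit_cost(units):6.2f}")
```

On these assumed parameters the cost index falls from 100 to roughly 3.8 after a thousand units and to about 0.14 after a million, suggesting why the early, high-priced military market handed first movers such a durable cost advantage over late entrants.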
American microelectronics firms were clearly benefiting substantially from the dual stimulus of a highly supportive public sector and rapidly-expanding demand from civil users of semiconductor devices. As we have already noted, there were also important organisational and institutional factors at play in underpinning the rise of American microelectronics, in particular the emergence of Silicon Valley, an issue to which we shall return in the next section. It is consequently not surprising that as late as 1976 US-owned firms accounted for seven of the world's top ten IC manufacturers,[20] while subsidiaries of these firms were also beginning to dominate many overseas markets, especially in Western Europe. In Britain, for example, by 1969 only 46% of domestic IC consumption originated in British plants. Of even greater concern was the increasing proportion of British IC production emanating from plants established by US corporations like Texas Instruments, Motorola, Intel and National Semiconductor. Indeed, by 1983 these firms were the four largest IC producers in the UK and accounted for over 45% of British consumption. While the British economy benefited significantly from the presence of such powerful firms, especially in terms of jobs, import-substitution and exports, their dominance reflected the failure of indigenous electronics firms to develop sufficiently competitive product ranges in what was widely regarded as a strategic industry.
In this context, it is important to note how the American multinational subsidiaries flourished in the UK market environment, undermining any simplistic claims that the demand stimulus had been weak. Of course, these subsidiaries benefited enormously from the technological, managerial and financial support provided by powerful parent corporations. On the other hand, to understand Britain's failure to establish a viable microelectronics industry, one must assess a wider variety of factors, including the international transfer of technology, public policy directions and the nature of corporate strategic decision-making in a high-risk technology. Before going on to assess how these factors intertwined, however, it is important briefly to discuss some theoretical work on both corporate strategy and firm competencies, providing a framework that one can apply directly to both the British electronics industry in general and Ferranti in particular.