An introduction to the Unix operating system

Unix’s development is without doubt one of the most important milestones in the history of computing. The operating system not only introduced some of today’s most elementary concepts in information technology, such as the hierarchically structured file system; it has also served as the basis for numerous other systems, like Apple’s macOS and iOS, or the open source Linux. In turn, this has led to the emergence of numerous derivatives, like Ubuntu, Debian, or mobile Android. But how exactly did Unix become one of the most influential operating systems, and why was its development team initially forced to record its ideas exclusively on blackboards and notepads?

The Multics joint project laid the foundations

In 1965, a working group presented its idea for a new operating system at the Fall Joint Computer Conference. The group consisted of employees from the Massachusetts Institute of Technology (MIT), General Electric, and Bell Laboratories (Bell Labs), then AT&T’s research and development department (and part of Nokia since 2016). They named the operating system Multiplexed Information and Computing Service, or Multics for short, and pursued completely new approaches, focusing on time-sharing in particular. Multics was among the first systems to allow multiple users to work simultaneously on one computer by sharing the underlying processor’s computing time.

The Multics working group needed a computer that met specific requirements to get the project off the ground: on the one hand, it had to have a clearly structured instruction set so that IBM’s high-level programming language PL/I, which was intended for the development, could be used. On the other hand, it had to support the planned multi-user operation and work asynchronously to minimize performance losses in memory management. For these reasons, the GE-635 and later the GE-645 from General Electric were selected. Development itself was carried out on the multi-user system CTSS, which MIT had developed back in the early 1960s and which was already up and running. Delays in the development of the PL/I compiler, financial bottlenecks, internal differences, and growing external pressure eventually led Bell Labs to withdraw from the project in 1969.

Multics becomes Unix

Multics was developed further at MIT and later distributed commercially on Honeywell 6180 machines by Honeywell International Inc. (until 1986), after Honeywell had acquired General Electric’s computer division. However, the computer scientist Ken Thompson, a Bell Labs employee at the time, could not let go of the idea of a multi-user system: together with Dennis Ritchie and a small team at AT&T, he began planning his own system based on Multics principles. But the search for a suitable computer initially proved fruitless – and as Bell Labs resisted purchasing a suitable machine, the developers began recording their notes and the design of a planned file system on notepaper and blackboards.

Finally, a used PDP-7 minicomputer from Digital Equipment Corporation (DEC) was acquired for the planned project – a computer system that was “only” the size of a wall unit. Early development relied on GECOS (General Electric Comprehensive Operating System), running on a GE mainframe, as a cross-development platform. Valuable software tools like a command interpreter (sh) and an editor (ed) were quickly developed – initially still in assembly language (hardware-oriented, but simplified for humans) – along with the file system that had previously existed only on paper. Since the new operating system, unlike Multics, only allowed two users to work on a process at the same time, the team named it Unics, playing on its template’s name. Due to the limitations GECOS imposed on file name lengths, the final name Unix was decided upon.

First B, then C: Unix gets its own high-level programming language

After the Bell Labs team had written Unix and some other elementary programs, it was time to replace the assembly language used for this purpose with a less complex alternative. However, the plan to build on the pre-existing IBM language Fortran was rejected after a short time. Instead, work began on a language of their own, named B, which was strongly oriented towards PL/I – the Multics language – and BCPL (Basic Combined Programming Language), developed at MIT. Ritchie and his colleagues subsequently rewrote some of the system tools in this language, until they received a new PDP-11 computer in 1970 and were once again forced to rethink their approach. This was because the new system architecture was not word-oriented like the PDP-7 and the programming language B, but byte-oriented instead.

Over the next two years, Bell Labs developed the successor C, whose syntax and other features can be found in numerous modern programming languages like C++, JavaScript, PHP, or Perl. When the language was mature enough in 1973, the development team started rewriting the complete Unix kernel in C. The result was published by the Unix team in the mid-1970s. Since AT&T, as a regulated telecommunications monopoly, was not allowed to sell software at the time, Unix (version 6) – a multi-user system that also allowed several processes to run simultaneously – was made available to all interested universities free of charge, including a C compiler, which made the system usable on almost all platforms.

Hardware-friendly and open source: Unix conquers the developer scene

With the release of the Unix software to educational institutions, the success of the new operating system quickly became more and more apparent – initially as a plaything in programming circles, since everyday workloads on the IBM mainframes and PDP machines of that time continued to run on native systems like RSX-11, RT-11, or RSTS. For developers, though, the freely available source code of the kernel and the individual applications offered more than just a learning effect: the low demands Unix made on hardware and its high usability encouraged experimentation and further development. This was particularly well received at the University of California, Berkeley (Thompson’s former home university) – and the fact that Thompson took up a visiting professorship in its newly created computer science faculty in 1976 probably played a significant role in this.

Bill Joy and Chuck Haley, two graduate students at the time, improved the Pascal system developed by Thompson and, with ex, programmed a completely new text editor – the predecessor of vi, which can still be found in the standard installations of unixoid systems today. In 1977, under Joy’s direction, a modified variant of Unix appeared that contained the improvements and further developments made up to that point. This Berkeley Software Distribution (BSD) later integrated the TCP/IP network protocol into the Unix universe, was the first to meet the requirements of a free operating system (thanks to its own BSD license), and is considered one of the most important Unix modifications to date.

The 1980s: commercialization and the Unix wars

In the following years, more and more modifications were developed, including some with commercial aims. For example, Microsoft acquired a Unix V7 license in 1979 to develop ports for Intel and Motorola processors, among other things. In the following year, the company released Xenix, which was originally planned as a standard operating system for PCs but ended up making hardware demands that were too high. Microsoft finally placed further development in the hands of the software manufacturer SCO (Santa Cruz Operation) in order to concentrate on OS/2 and the further development of MS-DOS.

Bill Joy also jumped on the bandwagon in 1982 with his newly founded company Sun Microsystems and its proprietary BSD-based system SunOS (the predecessor of Solaris), which was specifically designed for use on servers and workstations.

However, the real battle for Unix fans was fought between AT&T, which by now had received permission for commercial distribution, and the University of California, Berkeley, which could point to valuable innovations thanks to its large number of contributing programmers. AT&T first tried to conquer the market with System III (1981) and then with the optimized System V (1983), both of which were based on Unix V7. Berkeley responded with 4.3BSD, for which 1,000 licenses were issued within 18 months. This made it much more popular than the paid System V, which lacked the Fast File System (FFS) and the network capability (thanks to integrated TCP/IP) of Berkeley’s variant.

With System V’s fourth release (1988), AT&T implemented these two and many other BSD features, as well as features from Xenix and SunOS, which led many users to switch to the commercial option.

Thanks, Penguin: Unix becomes a server solution

While the various Unix systems initially competed with each other for sales and loyalty, Apple and Microsoft began their rivalry in the personal computer sector and later in the server field. Microsoft won the race for home PCs, but in 1991 a system based on Unix concepts suddenly appeared on the scene with Linux, which in the following years won over the server environment. With its freely licensed kernel and the freely available GNU software, the developer Linus Torvalds had fulfilled the desire for a competitive open source operating system.

To this day, numerous Linux distributions like Debian, CentOS, Red Hat, or Ubuntu are used as system software for all kinds of servers, and Ubuntu in particular is becoming more and more popular for home PCs. Linux, however, is by far not the only important Unix successor in today’s software world: ever since Mac OS X 10.0 and Mac OS X Server 1.0, Apple’s operating system has used Darwin, a free BSD variant, as its substructure. Berkeley Unix itself is even represented several times, with numerous other free derivatives like FreeBSD, OpenBSD, or NetBSD.

With iOS (which shares its system base with macOS) and Android (based on the Linux kernel), the two most widely used operating systems for mobile devices also belong to the Unix family.

What is Unix? The most important milestone features of the system

When it was introduced, many of Unix’s distinguishing features were absolute novelties that not only went on to influence the development of unixoid systems and distributions, but were also taken up by the competitors Apple and Microsoft in their operating systems. The following characteristics in particular show that Ritchie, Thompson, and their Unix colleagues were pioneers of modern operating systems:

Hierarchical, universal file system

An elementary part of Unix right from the beginning was the hierarchically organized file system, which allows the user to structure files into folders. Any number of subdirectories can be assigned to the root directory, which is marked with a “/”. Following the basic principle of “everything is a file,” Unix also maps drives, hard disks, terminals, and other computers as device files in the file system. Some derivatives, including Linux, even represent processes and their properties as files in the virtual procfs file system.
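How literally “everything is a file” is meant can be seen from the fact that device files are opened and read with the very same system calls as ordinary files. A minimal POSIX C sketch, assuming a system that provides the /dev/urandom device:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* A device file is opened exactly like a regular file. */
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* read() behaves the same on devices and ordinary files. */
        unsigned char buf[8];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n < 0) { perror("read"); close(fd); return 1; }

        for (ssize_t i = 0; i < n; i++) printf("%02x", buf[i]);
        printf("\n");
        close(fd);
        return 0;
    }

The same open/read/close calls would work on a text file, a terminal, or a pipe – the uniform interface is the whole point of the principle.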

Multitasking

Another decisive factor in Unix’s success was the ability to execute several processes or programs simultaneously without them interfering with each other. The operating system was based on the method of pre-emptive multitasking right from the start. With this method, the scheduler (which is part of the operating system kernel) manages the individual processes through a priority system. It was only much later during the 1990s that Apple and Microsoft began implementing comparable process management solutions.
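On a Unix system, a new concurrent process is traditionally created with the fork() system call; from that moment on, the kernel’s scheduler alone decides when each process gets the CPU. A minimal POSIX C sketch of two processes running independently:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* fork() duplicates the calling process. */
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {
            /* Child process: scheduled independently of the parent. */
            printf("child  (pid %d) is running\n", (int)getpid());
            _exit(0);
        }

        /* Parent process: continues regardless of what the child does. */
        printf("parent (pid %d) keeps working\n", (int)getpid());
        wait(NULL); /* collect the child's exit status */
        return 0;
    }

Because the scheduling is pre-emptive, neither process can monopolize the processor – exactly the property the paragraph above describes.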

Multi-user system

A system that would allow several users to work simultaneously had already been Multics’ main goal. To achieve this, an owner is assigned to each file and process. Even though Unix was initially limited to two users, this feature was part of the system software right from the start. The advantage of such a multi-user system was not just that several people could share the performance of a single processor at the same time, but also the associated rights management: administrators can define access rights and available resources for each user. Initially, however, multi-user operation also depended on the hardware support of the respective computer.
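This rights management is visible directly in the file system: every file stores its owner and permission bits, which the kernel enforces on every access. A small POSIX C sketch that reads this metadata via stat() – the file path is an arbitrary example:

    #include <pwd.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        const char *path = "/etc/passwd"; /* arbitrary example file */
        struct stat st;
        if (stat(path, &st) != 0) { perror("stat"); return 1; }

        /* Resolve the numeric owner ID to a user name. */
        struct passwd *pw = getpwuid(st.st_uid);

        /* Print the owner and the classic rwx permission bits in octal. */
        printf("%s: owner=%s, mode=%03o\n",
               path, pw ? pw->pw_name : "unknown",
               (unsigned)(st.st_mode & 0777));
        return 0;
    }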

Network capability

With 4.2BSD, Berkeley’s Unix became one of the first operating systems to integrate the internet protocol stack in 1983, providing the foundation for the internet, simple network configuration, and the ability to act as a client or server. In the late 1980s, the fourth release of System V (mentioned above) also added the legendary protocol family to the kernel of the commercial AT&T system. Windows, by contrast, would not support TCP/IP until version 3.11 (1993), and even then only via a separate extension.
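4.2BSD also introduced the Berkeley sockets API, which is still how Unix programs take on the server or client role today. A minimal POSIX C sketch that opens a listening TCP socket – the port number 8080 is an arbitrary example:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        /* A socket is just another file descriptor. */
        int fd = socket(AF_INET, SOCK_STREAM, 0); /* TCP over IPv4 */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); /* 127.0.0.1 */
        addr.sin_port = htons(8080);

        if (bind(fd, (struct sockaddr *)&addr, sizeof addr) != 0 ||
            listen(fd, 1) != 0) {
            perror("bind/listen"); close(fd); return 1;
        }
        printf("listening on 127.0.0.1:8080\n");
        close(fd);
        return 0;
    }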

Platform independence

While other operating systems and their applications were still tailored to a specific processor type at the time Unix was created, the Bell Labs team pursued the approach of a portable system right from the start. Although the first language used was assembly, the project created a new high-level programming language – B, the predecessor of C – as soon as the basic structure of the system software was in place. Even the components written in C were at first still strongly bound to the PDP machine architecture on which Ritchie and his colleagues based their work, despite the included compiler. Only with the heavily revised Unix V7 (1979) did the operating system rightly earn its reputation as a portable system.

The Unix toolbox principle and the shell

Unix systems combine a multitude of useful tools and commands, each of which is usually designed for only a few specific tasks. Linux, for example, uses the GNU tools. The basic principle for general problem solving is to find the answer in a combination of standard tools instead of developing specific new programs. The most important tool has always been the shell (sh), a text-oriented command interpreter that provides extensive programming options. This classic user interface can be used entirely without a graphical user interface, even if a GUI naturally increases user comfort. For experienced users, the shell offers some significant advantages (see the sketch after this list):

  • Simplified operation thanks to intelligent auto-completion
  • Copy and paste functions
  • Both interactive use (direct input) and non-interactive use (execution of scripts) are possible
  • Higher flexibility, since the individual applications (tools, commands) can be combined almost freely
  • Standardized and stable user interface, which is not always guaranteed with a GUI
  • Work steps are automatically documented when they are put into scripts
  • Quick and easy implementation of applications
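
The toolbox principle in action: rather than writing a dedicated program, standard tools are combined in a pipeline. A hedged C sketch that runs such a pipeline via popen() – counting the entries in /etc with ls and wc is an arbitrary example, assuming both standard tools are on the PATH:

    #include <stdio.h>

    int main(void) {
        /* Combine standard tools in a pipeline instead of writing
           a dedicated program: list /etc, then count the entries. */
        FILE *p = popen("ls /etc | wc -l", "r");
        if (p == NULL) { perror("popen"); return 1; }

        char line[64];
        if (fgets(line, sizeof line, p) != NULL)
            printf("entries in /etc: %s", line);

        pclose(p);
        return 0;
    }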

Conclusion: if you want to understand how operating systems work, take a look at Unix

The rise of Microsoft and Apple, inseparably linked to their founders Bill Gates and Steve Jobs, is undoubtedly unparalleled. However, the foundation for these two giant success stories was laid by the pioneering work of Dennis Ritchie, Ken Thompson, and the rest of the Unix team between 1969 and 1974. Unix has not just produced its own derivatives, but has also influenced other operating systems with concepts like the hierarchically structured file system, the powerful shell, and high portability. To implement the latter, the most influential programming language in computer history, C, was developed almost in passing.

If you want to understand the possibilities of the C language and how operating systems work in general, there is no better object of study than a Unix system. You do not even have to use one of the classic variants: Linux distributions like Gentoo or Ubuntu have adapted to modern demands without giving up basic features like maximum control over the system. The beginner-friendly macOS limits your possibilities somewhat more, but masters the balancing act between a powerful Unix base and a well-designed graphical user interface with flying colors.
