
Classes of Computers

Computers are usually grouped into four basic classes according to their size, power, and purpose:

• Supercomputers. These are the largest, fastest, and most expensive computers available. Although they technically fall under the mainframe class, supercomputers differ in that they are designed to handle a relatively small number of extremely complicated tasks in a short amount of time. Typical applications include calculation-intensive research and sophisticated work such as theoretical physics, turbulence calculations, weather forecasting, and advanced animated graphics.

• Mainframes. Designed to run as many simultaneous applications as possible, these computers are typically large and fast enough to handle many users when connected to multiple terminals. They are most commonly used by research facilities, large businesses, and the military.

• Minicomputers. Similar to mainframes in function, minicomputers are smaller in size and power and are better described as midrange computers. They still serve multiple users, and in business settings they are often networked with other minicomputers.

• Microcomputers. Used almost synonymously with PCs, microcomputers are built around a single-chip microprocessor.

These classes should not be taken as an absolute breakdown: there is some overlap, and other classes fall between the four mentioned. For example, a mini-supercomputer is simply one that falls between a supercomputer and a mainframe. In addition, a computer’s power and capabilities are not limited by its class alone. Many advances in computer technology during the past few years have made new microcomputers more powerful than minicomputers. Even more surprising is that a new microcomputer can perform as well as some supercomputers from just a few decades past!
The influence of computer technology is a relatively recent phenomenon, driven by the falling cost of computers over the last two decades. However, the philosophical basis for the construction and use of computing systems reaches back much further than twenty years.
Charles Babbage, a nineteenth-century mathematician at Cambridge University in England, is often cited as a pioneer in the computing field. Babbage designed an "analytical engine" whose capabilities strikingly foreshadowed the basic functions of today's computers, although the design could not be realized with the manufacturing capabilities of his time. The analytical engine provided for input, storage, mathematical calculation, grouping of results, and printing of results in a typeface. Other, less complex mechanical aids to computation include the slide rule and even the abacus.
The vast majority of contemporary computers are digital, although some analog computers do exist. The latter have been relegated almost to a footnote in contemporary computing because of the overwhelming advances made in digital technology. The difference between the two lies in how they represent data: digital systems work with discrete binary values, whereas analog systems work with continuously varying physical quantities.
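As a minimal, illustrative sketch of what binary representation means in practice (the value 13 and the Python snippet below are examples added here, not part of the original text), a digital system stores a number as a pattern of on/off bits that can be reproduced exactly:

    # Illustrative example: the integer 13 is held as the discrete bit
    # pattern 1101; every quantity a digital computer manipulates reduces
    # to such on/off states.
    value = 13
    bits = format(value, "b")   # '1101'
    print(bits)

    # Decoding the bits recovers the value exactly, unlike an analog
    # quantity, which is a continuous (and therefore approximate) measurement.
    assert int(bits, 2) == value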
Although digital computers vary in size, shape, price, and capabilities, all of them share four common features. First, their circuits exist in one of two states, either "on" or "off"; this characteristic is the basis of binary logic. Second, all store data in binary form. Third, all digital computers can receive external input data, perform various functions on that data, and return the output or result to the user. Finally, all digital computers are operated by instructions organized into sets of separate steps. On a related note, many digital systems can perform many different functions at the same time using a technique known as parallel processing.
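The short Python sketch below illustrates these features together: it takes a small list of input data, applies a function to it in separate steps, returns the results as output, and then repeats the work using parallel worker processes. The data, the squaring function, and the use of Python's concurrent.futures module are illustrative assumptions, not part of the original text.

    # Illustrative sketch of the input -> process -> output cycle, plus a
    # parallel variant using multiple worker processes.
    from concurrent.futures import ProcessPoolExecutor

    def square(x: int) -> int:
        # One discrete step of the stored program: a single, well-defined
        # operation applied to an item of input data.
        return x * x

    def main() -> None:
        data = [1, 2, 3, 4, 5]                  # external input data

        sequential = [square(x) for x in data]  # functions performed on the data

        # Parallel processing: the same steps carried out simultaneously
        # by several worker processes.
        with ProcessPoolExecutor() as pool:
            parallel = list(pool.map(square, data))

        print(sequential)                       # output returned to the user
        print(parallel)

    if __name__ == "__main__":
        main()

Both runs print [1, 4, 9, 16, 25]; the __main__ guard is included so the worker processes can import the module safely.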



Emory W. Zimmers, Jr. and Technical Staff
Enterprise Systems Center
Lehigh University
Bethlehem, Pennsylvania