The question then becomes: why would you want to? Asus argues there are several scenarios where such a setup would be useful: coders might want to see the code on one screen and the end result on another; students might keep reference material on the secondary screen while they work on a dissertation; video editors could put the timeline on one screen and the output on the other.
Just as the best camera is the one you have with you, the best computer most of us have to hand is our phone. For the past decade, laptop makers have tried to make their products more tablet-like; the next decade could see manufacturers making laptops more like phones.
Meanwhile, some manufacturers have tried to make smartphones more PC-like. This marrying of smartphone and desktop is probably the way forward, at least for the next few years.
As the hardware evolves over the coming decade, so of course must the software powering it. However, while AI is likely to play a bigger part over the next few years, few think it will become the go-to mechanism for controlling a PC. Instead, users may start an interaction or transaction by voice and then switch seamlessly to and from typed commands.
Expect security to be at the forefront of future OS experiences too. As hacks and data breaches continue to proliferate, both Apple and Microsoft are moving to soup up their onboard security. The processor will remain the beating heart of the PC, and with a resurgent AMD pushing Intel, we can expect accelerated developments in this component too.
In the near term, Intel will deliver its first 10nm desktop CPUs after seven years of 14nm products based on the Skylake architecture. To see what we mean, simply turn to our workstation Labs on p. The bad news for Intel, and excellent news for everyone else, is that AMD is set to release its Zen 3 core later this year. Building on its 7nm Zen 2 cores, these chips promise faster clocks, better power efficiency and yet more cores.
For the first time, AMD looks to have a laptop package to rival Intel's. So we have little doubt that AMD will be a much bigger rival to Intel in the x86 space traditionally favoured by Windows.

Skilled programmers write code that is more elegant and, more importantly, leaner, so that it executes faster. In the early days, when the hardware was relatively primitive, craftsmanship really mattered. When Bill Gates was a lad, for example, he wrote a Basic interpreter for one of the earliest microcomputers, the TRS-80. Because the machine had only a tiny read-only memory, Gates had to fit it into just 16 kilobytes.
There are thousands of stories like this from the early days of computing. The construction of sprawling software ecosystems such as operating systems and commercial applications required large teams of developers; these then spawned associated bureaucracies of project managers and executives. And in the process, software became bloated and often inefficient. Conscientious programmers were often infuriated by this.
They become lazier: because the hardware is fast, they do not try to learn algorithms or to optimise their code… this is crazy!

Signals pass down the wires at the speed of light in metal, approximately half the speed of light in vacuum. The transistorized switches that perform the information processing in a conventional computer are like empty hoses: when they switch, electrons have to move from one side of the transistor to the other.
The 'clock rate' of a computer is then limited by the maximum length that signals have to travel divided by the speed of light in the wires and by the size of transistors divided by the speed of electrons in silicon.
In current computers, these numbers are on the order of trillionths of a second, considerably shorter than the actual clock times of billionths of a second. The computer can be made faster by the simple expedient of decreasing its size. Better techniques for miniaturization have for many years been, and still are, the most important approach to speeding up computers.
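To put rough numbers on that, here is a quick back-of-envelope sketch in Python. The path length, transistor size and electron velocity are illustrative assumptions rather than figures from the text; the point is only that both delays come out well under a billionth-of-a-second clock tick, with the transistor transit time down in the trillionths of a second.

# Rough back-of-envelope estimate of the delays described above. Every
# specific number here is an illustrative assumption, not a figure from
# the text: a 3 cm signal path, a 100 nm transistor and a typical
# electron velocity in silicon.

C_VACUUM = 3.0e8                 # speed of light in vacuum, m/s
c_wire = 0.5 * C_VACUUM          # "speed of light in metal": roughly half of c
path_length = 0.03               # assumed longest signal path, 3 cm
transistor_size = 100e-9         # assumed transistor feature size, 100 nm
electron_speed = 1.0e5           # assumed electron velocity in silicon, m/s

wire_delay = path_length / c_wire                  # time to cross the wiring
transit_delay = transistor_size / electron_speed   # time to cross a transistor
clock_period = 1.0e-9                              # a billionth of a second (1 GHz)

print(f"wire delay:    {wire_delay:.1e} s")     # ~2e-10 s
print(f"transit delay: {transit_delay:.1e} s")  # ~1e-12 s
print(f"clock period:  {clock_period:.1e} s")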
Wires and transistors both possess capacitance, C (which measures their capacity to store electrons), and resistance, R (which measures the extent to which they resist the flow of current). The product of resistance and capacitance, RC, gives the characteristic time scale over which charge flows on and off a device.
When the components of a computer get smaller, R goes up and C goes down, so making sure that every piece of the computer has the time to do what it needs to do becomes a tricky balancing act.
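As a toy illustration of that trade-off, the snippet below models a wire as a simple resistor plus a parallel-plate capacitor and then shrinks every dimension by the same factor. The geometry, the copper resistivity and the silicon-dioxide dielectric are assumptions made for the example, not details from the text.

def wire_rc(length, width, height, spacing,
            resistivity=1.7e-8,              # assumed copper, ohm*m
            permittivity=3.9 * 8.85e-12):    # assumed SiO2 dielectric, F/m
    """Resistance and capacitance of a crude wire-over-ground-plane model."""
    resistance = resistivity * length / (width * height)     # R = rho * L / A
    capacitance = permittivity * (length * width) / spacing  # C = eps * A / d
    return resistance, capacitance

# Compare a baseline wire with the same wire scaled down 2x in every dimension:
# in this crude model R rises and C falls by the same factor, so RC barely
# moves, which is part of what makes the timing balancing act tricky.
for s in (1.0, 2.0):
    r, c = wire_rc(1e-3 / s, 1e-6 / s, 1e-6 / s, 1e-6 / s)
    print(f"scale 1/{s:.0f}: R = {r:.1e} ohm, C = {c:.1e} F, RC = {r * c:.1e} s")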
Technologies for performing this balancing act without crashing are the focus of much present research. So to make computers faster, their components must become smaller. At current rates of miniaturization, the behavior of computer components will hit the atomic scale in a few decades. At the atomic scale, the speed at which information can be processed is limited by Heisenberg's uncertainty principle.
Recently researchers working on 'quantum computers' have constructed simple logical devices that store and process information on individual photons and atoms.
Atoms can be 'switched' from one electronic state to another in about 10^-15 second. Whether such devices can be strung together to make computers remains to be seen, however. IBM Fellow Rolf Landauer notes that extrapolating current technology to its 'ultimate' limits is a dangerous game: many proposed 'ultimate' limits have already been passed.
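For a sense of where a figure like that comes from, here is a rough uncertainty-principle estimate. The 1 eV transition energy is an assumed, typical atomic energy scale, not a value from the text; dividing the reduced Planck constant by it gives a time within a factor of a few of 10^-15 seconds.

HBAR = 1.055e-34      # reduced Planck constant, J*s
EV = 1.602e-19        # one electronvolt in joules

transition_energy = 1.0 * EV               # assumed energy gap between the two states
min_switch_time = HBAR / transition_energy

print(f"~{min_switch_time:.1e} s")         # ~6.6e-16 s, of the order of 10^-15 s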
The best strategy for finding the ultimate limits on computer speed is to wait and see what happens.

Summers is a professor of electronic engineering technology at Weber State University in Ogden, Utah. His answer focuses more closely on the current state of computer technology: "Physical barriers tend to place a limit on how much faster computer-processing engines can process data using conventional technology. But manufacturers of integrated-circuit chips are exploring some new, more innovative methods that hold a great deal of promise.
Smaller traces mean that millions of transistors can now be fabricated on a single silicon chip. Increasing transistor densities allow more and more functions to be integrated onto a single chip.
A one-foot length of wire produces approximately one nanosecond (a billionth of a second) of time delay.
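That rule of thumb is easy to sanity-check: light covers one foot in almost exactly a nanosecond, and signals in real wires travel somewhat slower still.

FOOT_IN_METRES = 0.3048
SPEED_OF_LIGHT = 2.998e8                   # m/s, in vacuum

delay = FOOT_IN_METRES / SPEED_OF_LIGHT
print(f"{delay * 1e9:.2f} ns per foot")    # ~1.02 ns; a real wire is a bit slower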