Archive for September, 2009

Intel accelerates 32nm processor plans

Posted in Management, Processors on September 30, 2009 by cipri.muntean

Intel has announced a major update to its 32nm next-generation processor plans, revealing substantial new details on its chip roadmap and outlining a $7bn investment in new plant.

The first 32nm processor, code-named Westmere, will be in production by the fourth quarter of 2009. It will arrive in a dual-core, four-thread format suitable for desktops and notebooks, the company said in a conference call on Tuesday.

The design initially will also include a 45nm integrated graphics and memory controller as part of a multichip package, with this component moving to 32nm — and possibly fully integrated — in 2010. The same year will see the arrival of Gulftown, a six-core, 12-thread chip for desktops, as well as the first Westmere-based Xeon server chips.

Intel announced that as well as moving integrated graphics and memory into the main processor, it was moving all remaining chipset functions into a single chip, the Intel 5 series. With the Intel 5, motherboard makers could build PCs with all the logic components in just two chips.

“We have excellent health on Westmere,” an Intel spokesperson said. “We were thrilled with the first silicon, and were able to boot and run applications on the very first wafers. We have enough confidence that we’re accelerating the 32nm ramp in the mainstream.”

The spokesperson also said that a version of the chip would be demonstrated later on Tuesday in San Francisco.

Intel said the 32nm process was its first to use immersion lithography, a technique in which the final lens and the wafer are separated by a thin layer of water whose higher refractive index allows finer patterns to be printed.

Westmere is substantially the same architecture as the existing 45nm Nehalem chip, shrunk to the new 32nm process. Seven new instructions have been added, the company said, to accelerate encryption and decryption for uses such as secure communications and disk encryption. The next major update, Sandy Bridge, will introduce a new architecture that will span the next process transition, to 22nm.
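Those seven instructions are widely identified as the AES-NI set (six AES primitives plus the PCLMULQDQ carry-less multiply). As a rough illustration rather than anything from Intel's announcement, the C sketch below checks whether the processor it runs on reports those features via CPUID; it assumes an x86 machine and a GCC or Clang toolchain.

/* A minimal sketch, not Intel code: report whether this CPU exposes the
   AES-NI and PCLMULQDQ instructions. Assumes x86 and a GCC/Clang toolchain. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not available\n");
        return 1;
    }

    /* CPUID.01H:ECX bit 25 = AES-NI, bit 1 = PCLMULQDQ */
    printf("AES-NI:    %s\n", (ecx & (1u << 25)) ? "present" : "absent");
    printf("PCLMULQDQ: %s\n", (ecx & (1u <<  1)) ? "present" : "absent");
    return 0;
}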

In support of these moves, the company said it was spending $7bn (£4.8bn) over two years across four chip-production sites in the US.

[Image: Intel’s first 32nm processor will have a 45nm graphics and memory controller on a separate chip but in the same package.]

IBM’s 35 atoms and the rise of nanotech

Posted in Management, Processors on September 30, 2009 by cipri.muntean

When IBM researcher Don Eigler picked up and moved the first individual atom 20 years ago today, he paved the way for what arguably was the smallest publicity stunt ever: IBM’s logo made from a precise arrangement of 35 xenon atoms.

But moving tiny atoms had big consequences by making the idea of assembling devices atom by atom very real. And the company has built on that nanotechnology foundation, storing information on specific gold atoms, collecting carbon monoxide molecules into computer logic circuits, and pursuing a vision for vastly more compact computing technology.

Despite the progress, Eigler is cautious about when or even if his ideas for computing will bear fruit.

“We did the introduction, and we’re in chapter 1,” Eigler said. “This is way off in the future, if it ever comes about. I cannot conceive, under the best circumstances, this is going to happen in 10 or 15 years.”

[Image: Don Eigler moved the first individual atom 20 years ago, and shortly afterwards wrote ‘IBM’ with 35 xenon atoms.]

Eigler, now an IBM fellow, said he was “boggled” that day he moved his first atom with an IBM device called a scanning tunnelling microscope. He programmed the system to make the move, then held his breath while his screen went blank during the actual operation.

“You can’t see it while you actually move it. Then you see the picture come in and say, ‘Yes, it’s there’,” Eigler said. He moved the atom back and forth three times to make sure it really worked: “For us, that’s [a] sort of sacred thing. The key thing and most important thing about science is reproducibility. If you can’t reproduce your own result, you might as well forget it. It’s as if you’d never done it.”

Shortly after that, in November 1989, Eigler arranged the 35 atoms to spell IBM. There was, of course, publicity in it for the company, but Eigler had no complaints. For one thing, it demonstrated that IBM really could control atoms with atomic-scale precision and that its work was not just a fluke. For another, Eigler was grateful that IBM let him pursue his work.

“It was more than a publicity stunt. Emotionally, for me, it was much more important. This is going to sound hokey, but it’s the truth. IBM picked me up off the scrap heap of science and gave me every opportunity a scientist could hope for to be successful,” Eigler said. “As far as I was concerned, it was payback time.”

No mass manufacturing
Eigler and colleagues have been working on the technology ever since but, so far, the benefits have been indirect. That is because moving and studying atoms with a scanning tunnelling microscope and its offshoot, the atomic-force microscope, is a far cry from assembling computing devices that operate at much larger scales.

“Being able to put atoms together with atomic-scale precision at a level that allows you to deliver a marketable product is something that is largely hope and vision for our future,” Eigler said. “We are not there yet.”

There are other directions in nanotechnology research: Eigler cited graphene and topological insulators as possibilities. He, however, remains excited to pursue his own long-term vision for computers that process information without today’s reliance on the movement of electrons.

Specifically, he is interested in using the quantum mechanical property called spin for computing. The conventional conception for this general idea, called spintronics, uses spin to control the flow of electric current, but Eigler wants to use spin alone.

“My goal is to do everything we need to do for computation — logic, storage, information transport — but without moving electrons around at all,” Eigler said.

One advantage of the approach is that it avoids the electrical currents whose waste heat is a major limiting factor in today’s computers. Another is that it could enable three-dimensional computing designs packed much more densely with processing power than today’s two-dimensional circuitry etched onto silicon wafers.

Spin engineering
The spin of one atom can affect that of its neighbour. The hard part is arranging atoms in order to harness that effect and perform useful computing operations.

“We have to learn how to engineer things so they work the way we want them to work. If you have two atoms, each has spin, and those spins are coupled together in usually two, three, or even four different ways,” he said. “You have to place them in the appropriate relationship with one another.”

One milestone towards this goal was work by Gerhard Meyer of IBM’s Zurich Research Laboratory and others to store data in the form of electrical charges on individual atoms of gold, Eigler said.

In another, IBM’s Christopher Lutz found he could trigger a ‘molecule cascade’, in which a series of carbon monoxide molecules could transmit information. The metastable molecules could store energy, then release it from one neighbour to another, similar to a chain of balanced dominoes falling.

Lutz then found a way to arrange those molecules into AND gates and OR gates, the basic logical processing units that are among the foundations of today’s computers. The work did not use spin, but it was a step in that direction, Eigler said.

Building blocks
One possible intermediate step between moving single atoms and mass manufacturing is what Eigler calls nano plug-ins.

If physicists and engineers could figure out how to construct individual logic gates out of a complicated molecule, IBM chemists might be able to work out a way to synthesise such units in quantity. Next would come the assembly process of snapping these units together appropriately.

“That strategy for building things that work on a very small scale may well be what we see in the future,” Eigler said.

And it may arrive, even if his spin-based computation does not. “It may be [used with] regular conventional electronics, [or] with carbon nanotubes or graphene,” he said. This brings him to the point about why IBM Research invests in such distantly useful technologies.

“The knowledge we’re generating in the process of getting there,” Eigler said, “is likely to feed into the industry much sooner than the actual outcome — if we ever get to that outcome”.

[Image: Don Eigler]

PCA: Microsoft Program Compatibility Assistant

Posted in Windows 7 on September 30, 2009 by cipri.muntean

For this blog, I thought that I would spend a little time looking at some of the compatibility features included in Windows 7. I wasn’t surprised to learn that Microsoft views application (and driver) compatibility as a vital element of the migration effort to Windows 7. Internally, Microsoft has put a figure of somewhere near $60 billion over the next five years on application compatibility, meaning that just getting and keeping things working on 7 is worth roughly $1,000,000,000.00 a month to Microsoft. That should focus the mind, eh?

Windows 7 delivers a number of compatibility features beyond the compatibility layers introduced in Windows XP (and enhanced in Service Pack 2). One of the more interesting compatibility functions delivered by Microsoft is the Program Compatibility Assistant (PCA), which analyses the following:

– Application installation routines (including uninstallation)
– Application updates and patches
– Application reinstalls (but not MSI-driven repairs)
– Application loads
– Session startup events
– Verification of post-install application events
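For readers who want to poke at this on their own machine, the PCA keeps a per-user record of the programs it has acted on. The C sketch below enumerates that record; note that the registry path used here (AppCompatFlags\Compatibility Assistant\Persisted) is the location commonly reported for Windows 7 rather than a documented API, so treat it as an assumption.

/* Hedged sketch: list the executables Windows 7's PCA has recorded.
   The key path is an assumption based on commonly reported behaviour,
   not a documented interface. Compile as a Win32 console program. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *path = "Software\\Microsoft\\Windows NT\\CurrentVersion\\"
                       "AppCompatFlags\\Compatibility Assistant\\Persisted";
    HKEY key;

    if (RegOpenKeyExA(HKEY_CURRENT_USER, path, 0, KEY_READ, &key) != ERROR_SUCCESS) {
        printf("PCA store not found on this machine.\n");
        return 1;
    }

    char name[1024];
    DWORD index = 0;
    for (;;) {
        DWORD nameLen = sizeof(name);
        /* Each value name is the full path of a program the PCA has handled. */
        if (RegEnumValueA(key, index++, name, &nameLen,
                          NULL, NULL, NULL, NULL) != ERROR_SUCCESS)
            break;
        printf("%s\n", name);
    }

    RegCloseKey(key);
    return 0;
}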

This compatibility tool is “baked” into Windows 7 but is NOT available on any of the Microsoft server products – this includes Longhorn.

Surely getting applications working on the server platform would be just as important as on the desktop. Maybe Microsoft’s answer will be its hypervisor virtualization technology. But, as we found out this week from WinHEC 2007, we are going to have to wait a while: OS-level virtualization integrated into the server OS will not be released until the end of this year, and I feel that we will have to wait another 9-12 months for a service pack before this technology is ready for production deployment.

Getting back to the Microsoft Program Compatibility Assistant, there are three main scenarios where the PCA is used:

– Detecting application installation failures
– Detecting program failures under UAC
– Assessing startup failures

[Image: Windows 7 boot screen]

The PCA monitors application installation actions through a heuristic, or “recipe”, approach, displaying a dialog box if a known compatibility problem exists for that application and a “compatibility layer” fix is available. These layer fixes effectively deliver Windows XP SP2 compatibility: the thinking here is that if the application worked under XP SP2, then it will also work on 7 with the “XPSP2” compatibility layer applied. I will discuss compatibility layers in depth in a later blog post, as this is a huge area (so big that there should be an O’Reilly book on the topic).
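To make that a bit more concrete, this is roughly how a compatibility layer gets attached to an executable: a per-user string value under the AppCompatFlags\Layers registry key, whose name is the program’s full path and whose data names the layer. The sketch below writes the XP SP2 layer for a hypothetical program; the key path and the “WINXPSP2” token reflect commonly observed Windows 7 behaviour rather than anything stated in this post, so verify them on your own build.

/* Hedged sketch: tag a (hypothetical) legacy program with the XP SP2
   compatibility layer, the same per-user setting the Compatibility tab writes. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *exePath = "C:\\LegacyApps\\oldapp.exe";  /* hypothetical target */
    const char *layer   = "WINXPSP2";                    /* XP SP2 layer token  */
    const char *keyPath = "Software\\Microsoft\\Windows NT\\CurrentVersion\\"
                          "AppCompatFlags\\Layers";
    HKEY key;

    if (RegCreateKeyExA(HKEY_CURRENT_USER, keyPath, 0, NULL, 0,
                        KEY_SET_VALUE, NULL, &key, NULL) != ERROR_SUCCESS) {
        printf("Could not open the Layers key.\n");
        return 1;
    }

    if (RegSetValueExA(key, exePath, 0, REG_SZ, (const BYTE *)layer,
                       (DWORD)strlen(layer) + 1) == ERROR_SUCCESS)
        printf("Applied layer %s to %s\n", layer, exePath);

    RegCloseKey(key);
    return 0;
}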

When it comes to handling application errors with the User Account Control feature, the PCA analyses the detected compatibility issue and automatically raises the process’s security profile (using ElevateCreateProcess) so that the next time the application is loaded, it just works. Microsoft is so confident about this process that there are no configuration options for the PCA and UAC.
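For comparison, here is what that elevation looks like when a program asks for it explicitly rather than relying on the PCA’s fix: launching a child process with the “runas” verb, which triggers the UAC consent prompt. The target path is purely hypothetical; this is a sketch of the general mechanism, not of the PCA’s internals.

/* Hedged sketch: explicitly launch a program elevated via the "runas" verb.
   This is roughly the behaviour the ElevateCreateProcess fix arranges
   automatically for a flagged application. Link with shell32. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SHELLEXECUTEINFOA sei = { sizeof(sei) };
    sei.fMask  = SEE_MASK_NOCLOSEPROCESS;
    sei.lpVerb = "runas";                      /* request elevation (UAC prompt) */
    sei.lpFile = "C:\\LegacyApps\\setup.exe";  /* hypothetical installer         */
    sei.nShow  = SW_SHOWNORMAL;

    if (!ShellExecuteExA(&sei)) {
        printf("Launch failed or elevation was declined (error %lu)\n",
               GetLastError());
        return 1;
    }

    printf("Launched elevated.\n");
    if (sei.hProcess)
        CloseHandle(sei.hProcess);
    return 0;
}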

When it comes to application startup issues, the Program Compatibility Assistant deals with two main cases: limited application compatibility issues and severe compatibility limitations. When an application is loaded in the “startup” session, the PCA will display a dialog box relating to the application compatibility issue and deliver one or more of the following messages:

· Pointing the user to an update from the software vendor for that program.
· Pointing the user to a software vendor’s website for more information.
· Pointing the user to a Microsoft Knowledge Base article for more information.

I think more automated fixing could be done here, given that Microsoft has intimate knowledge of what works and what does not. More on this question in my next entry.