Hyper-Threading Technology is a hardware innovation that allows more than one thread to run on each core. With more threads, more work can be done in parallel.
When Hyper Threading Technology is active, the CPU exposes two execution contexts per physical core. This means that one physical core now works like two “logical cores” that can handle different software threads.
Two logical cores can work through tasks more efficiently than a traditional single-threaded core. By taking advantage of idle time when the core would formerly be waiting for other tasks to complete, Hyper-Threading Technology improves CPU throughput.
Enterprise, e-Business and gaming applications continue to put higher demands on processors. In the past, performance was improved by threading in software: instructions were split into multiple streams so that multiple processors could act upon them. Hyper-Threading Technology (HT Technology) provides thread-level parallelism on each processor, resulting in more efficient use of processor resources, higher processing throughput, and improved performance on today's multi-threaded software.
Hyper-Threading Technology is a groundbreaking innovation that significantly improves processor performance. Faster clock speeds are an important way to deliver more computing power. But clock speed is only half the story. The other route to higher performance is to accomplish more work on each clock cycle, and that's where Hyper-Threading Technology comes in. It fools the operating system into thinking it's hooked up to two processors, allowing two threads to be run in parallel, both on separate 'logical' processors within the same physical processor. The OS sees double through a mix of shared, replicated and partitioned chip resources, such as registers, arithmetic units and cache memory.
Hyper-Threading technology is a form of simultaneous multi-threading technology (SMT), where multiple threads of software applications can be run simultaneously on one processor. This is achieved by duplicating the architectural state on each processor, while sharing one set of processor execution resources. The architectural state tracks the flow of a program or thread, and the execution resources are the units on the processor that do the work: add, multiply, load, etc.
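The split between per-thread architectural state and shared execution resources can be sketched in miniature. The toy model below (all names are invented for illustration; nothing here mirrors real Intel hardware) gives each "logical processor" its own program counter and register file, while a single shared function does the actual arithmetic, alternating between the two threads:

```python
# Toy model of SMT: each thread keeps its own architectural state
# (program counter, registers), while one shared "execution unit"
# does the work. Illustrative only; not a model of real hardware.

class ArchState:
    """Per-thread architectural state: what the OS sees as a 'CPU'."""
    def __init__(self, program):
        self.pc = 0
        self.regs = {"acc": 0}
        self.program = program  # list of (op, operand) pairs

def execute(state):
    """The shared execution unit: runs ONE instruction of a thread."""
    op, val = state.program[state.pc]
    if op == "add":
        state.regs["acc"] += val
    elif op == "mul":
        state.regs["acc"] *= val
    state.pc += 1

# Two logical processors = two independent architectural states...
t0 = ArchState([("add", 2), ("mul", 3)])   # (0 + 2) * 3 = 6
t1 = ArchState([("add", 5), ("add", 1)])   # 0 + 5 + 1   = 6
# ...sharing one execution unit, interleaved cycle by cycle:
while t0.pc < len(t0.program) or t1.pc < len(t1.program):
    for t in (t0, t1):
        if t.pc < len(t.program):
            execute(t)

print(t0.regs["acc"], t1.regs["acc"])  # prints: 6 6
```

Both threads make forward progress even though only one set of execution resources exists, which is exactly the arrangement described above.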
Need for the Technology
Improving processor utilization has been an industry goal for years. Processor speeds have advanced until a typical processor today can run at frequencies over 2 GHz, but much of the rest of the system is not capable of running at that speed. To enable performance improvements, memory caches have been integrated into the processor to minimize the long delays that can result from accessing main memory.
Xeon processors, for example, now include three cache levels on the die. Large server-based applications tend to be memory intensive due to the difficulty of predicting access patterns, and their working data sets are also quite large. These two things can create bottlenecks regardless of memory prefetching techniques, and the resulting latency only gets worse when pointer-intensive applications are executed. Any mistake in prediction can force a pipeline to be cleared, incurring a delay while it refills. It is this latency that drives processor utilization down. Despite improvements in application development and parallel processing implementations, reaching higher utilization rates has remained an unmet goal.
Principle of Hyper-Threading Technology
HT Technology allows a single physical processor to function as two virtual or logical processors. There’s still just one physical processor in your PC — but the processor can execute two threads simultaneously. A physical processor can be thought of as the chip itself, whereas a logical processor is what the computer sees.
Hyper-Threading Technology enables thread-level parallelism (TLP) by duplicating the architectural state on each processor while sharing one set of processor execution resources. When a thread is scheduled and dispatched to a logical processor, LP0, Hyper-Threading Technology uses the necessary processor resources to execute the thread.
When a second thread is scheduled and dispatched on the second logical processor, LP1, resources are replicated, divided, or shared as necessary in order to execute the second thread. The processor makes selections at points in the pipeline to control and process the threads. As each thread finishes, the operating system idles the unused logical processor, freeing its resources for the one still running.
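On Linux, the physical/logical distinction is visible in /proc/cpuinfo, where `siblings` counts logical processors per package and `cpu cores` counts physical cores. The sketch below parses a hard-coded sample excerpt rather than the live file, so the values shown are illustrative only:

```python
# Parse an excerpt in /proc/cpuinfo format (a hard-coded sample,
# not read from a live system) to compare logical vs. physical
# processors. With HT enabled, siblings > cpu cores.
sample = """\
processor   : 0
siblings    : 2
cpu cores   : 1
processor   : 1
siblings    : 2
cpu cores   : 1
"""

fields = {}
for line in sample.splitlines():
    key, _, value = line.partition(":")
    fields.setdefault(key.strip(), []).append(value.strip())

logical = len(fields["processor"])      # entries the OS schedules on
physical = int(fields["cpu cores"][0])  # cores per physical package
print(f"{logical} logical processors on {physical} physical core(s)")
```

Here one physical core presents two `processor` entries to the operating system, which schedules onto them as if they were two CPUs.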
How HT Technology Works
Looking inside the processor we find that the core contains subsystems to enhance performance. These subsystems control program execution, perform instruction fetching, integrate the on-die cache, and handle all the instruction reordering and retiring. As threads are passed to the processor, the instruction fetching and reordering systems allocate resources to the incoming threads. The instructions in these threads are then sent to the execution system in an alternating fashion from the level 1 cache.
This continues until one of the logical processors no longer needs information from the level 1 cache and then the entire cache resource is allocated to the other logical processor. The execution core processes instructions in an order determined by dependencies in the data and availability.
The processor is allowed to execute instructions out of order, that is, in a different order than the order in which they arrived. This means instructions can be executed in the order that will yield the best overall performance. The instruction reordering and retiring system eventually completes all the out-of-sequence instructions that were executed, and then retires them in the original program order. Upon completion, instructions are retired much the way they were originally sent, with the logical processor taking turns.
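This execute-out-of-order, retire-in-order behavior can be illustrated with a toy simulation (illustrative Python, not a model of any real pipeline): a slow load lets younger, independent instructions overtake it, yet retirement still follows program order.

```python
# Toy sketch of out-of-order execution with in-order retirement.
# An instruction issues as soon as its input is ready; younger,
# independent instructions overtake a slow load. Retirement still
# happens in the original program order.

program = [          # (name, depends_on, latency_in_cycles)
    ("A", None, 3),  # slow load
    ("B", "A", 1),   # must wait for A's result
    ("C", None, 1),  # independent of A and B
    ("D", "C", 1),
]

issue_order, finish_at, cycle = [], {}, 0
while len(issue_order) < len(program):
    for name, dep, lat in program:
        ready = dep is None or finish_at.get(dep, float("inf")) <= cycle
        if name not in finish_at and ready:
            issue_order.append(name)
            finish_at[name] = cycle + lat
            break              # one issue per cycle in this toy model
    cycle += 1

retire_order = [name for name, _, _ in program]  # always program order
print("issued :", issue_order)   # ['A', 'C', 'D', 'B']
print("retired:", retire_order)  # ['A', 'B', 'C', 'D']
```

While A's load is in flight, C and D execute early; B then completes, and all four results become visible in the order the program specified.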
CPU Utilization by HT Enabled Processors
A sequence of instructions sent to the processor is called a thread. A traditional processor can handle instructions from only one thread at any given point in time. Even when we put such a processor under 100% load, we are never fully utilizing 100% of its execution units. With a Hyper-Threading enabled processor, those spare execution units can be used for computing other things.
Consider first a single superscalar processor: it is busy computing, yet about half of its execution resources remain unused. In a dual-CPU multiprocessing system working on two separate threads, again about 50% of both CPUs remains unutilized. With a Hyper-Threading enabled processor, both threads are computed simultaneously, and the CPU's efficiency increases from around 50% to over 90%. Finally, a pair of Hyper-Threading enabled processors can work on four independent threads at the same time; CPU efficiency is again around 90% (in this case there are four logical processors and two physical processors).
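These utilization figures can be reproduced with a back-of-the-envelope model: a core with three issue slots per cycle, and per-cycle instruction demands that are purely illustrative. One thread leaves slots empty; a second thread fills some of the leftovers.

```python
# Back-of-the-envelope utilization model: a core with 3 issue slots
# per cycle. The per-cycle demand figures are illustrative only.

SLOTS = 3
thread_a = [2, 1, 0, 2, 1]  # instructions thread A can issue each cycle
thread_b = [1, 2, 2, 0, 1]

def utilization(*threads):
    """Fraction of issue slots filled when these threads share the core."""
    used = total = 0
    for demands in zip(*threads):
        free = SLOTS
        for d in demands:          # each thread fills the leftover slots
            take = min(d, free)
            used += take
            free -= take
        total += SLOTS
    return used / total

print(f"single thread: {utilization(thread_a):.0%}")            # 40%
print(f"two threads  : {utilization(thread_a, thread_b):.0%}")  # 80%
```

One thread alone fills 40% of the slots in this model; adding a second thread raises the figure to 80%, which is the qualitative effect the scenarios above describe.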
Applications of Hyper-Threading Technology
Because Hyper-Threading delivers faster response and transaction times, many services, including Internet, search-engine, IT-security, streaming-video, database, and e-mail/file/print servers, have developed application software for dual-processor (DP) server systems. Applications that support multiple processors, such as customer relationship management, multimedia services, website administration, enterprise databases, business intelligence, collaboration tools, and supply chain management, can also run on HT-enabled processors.
• Editing digital video and audio
While editing digital pictures or home movies, an HT-enabled processor can manage more filters, transitions, special effects, and media types at once, making the experience easier and more enjoyable.
Hyper-threading Technology offers many benefits to e-Business and the enterprise such as:
- Enables support for more users, improving business productivity.
- Provides faster response times for Internet and e-Business applications enhancing customer experience.
- Increases the number of transactions that can be processed.
- Handles larger workloads.
Benefits and Limitations of Hyper-Threading Technology
- Hyper-Threading Technology is available on laptop, desktop, server, and workstation systems.
- Hyper-Threading Technology helps your PC work more efficiently by maximizing processor resources and enabling a single processor to run two separate threads of software simultaneously — which can deliver performance increases and improve user productivity.
- Multithreading also improves power efficiency by maintaining a higher level of processor utilization. This is important because in a system designed for full-power thermal and power delivery, the busier the processor, the greater the efficiency.
- Hyper-Threading is not multiprocessing on the desktop. The biggest difference is that true multiprocessor systems have dedicated caches, integer units, and floating-point units, whereas Hyper-Threading-enabled processors must fight for these resources internally.
- Hyper-Threading Technology can actually produce a performance loss if the load on the logical processors is not balanced.
- While the chip can execute and process multiple threads simultaneously, if both software threads want to access the same part of the CPU, they both will have to share what’s available to them. True SMP systems do not have this issue.
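The contention case can be sketched with the same kind of toy model (illustrative numbers, one shared floating-point divider assumed): two threads that both want the scarce unit every cycle simply split it, so each runs at half speed.

```python
# Toy sketch of resource contention under SMT: one shared FP divider,
# two threads that each want one divide per cycle. Priority alternates
# between the logical processors, so each thread gets half the unit.

demand = {"a": [1, 1, 1, 1], "b": [1, 1, 1, 1]}  # divides wanted per cycle
served = {"a": 0, "b": 0}

for cycle in range(4):
    free = 1                      # one FP divider available per cycle
    # round-robin priority between the two logical processors
    for t in (("a", "b") if cycle % 2 == 0 else ("b", "a")):
        take = min(demand[t][cycle], free)
        served[t] += take
        free -= take

print(served)  # {'a': 2, 'b': 2}: each thread runs at half speed
```

Each thread wanted four divides but received two, which is the "share what's available" behavior described above; a true SMP system with one divider per processor would serve all eight.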
The Future of Hyper-Threading Technology
Current Pentium 4-based MPUs use Hyper-Threading, but the next-generation cores, Merom, Conroe and Woodcrest, will not. While some have alleged that this is because Hyper-Threading is somehow energy inefficient, this is not the case.
Hyper-Threading is a particular form of multithreading, and multithreading is definitely on Intel's roadmaps for the generation after Merom/Conroe/Woodcrest. Quite a few other low-power chips use multithreading, including the PPE of the Cell processor (the CPU of the PlayStation 3) and Sun's Niagara.
The real question is not whether Hyper-Threading will return, because it will, but rather how it will work. Currently, Hyper-Threading is synonymous with simultaneous multithreading, but future variants may differ.
HT brings a marked improvement in processing speed. It is not as effective as two physical processors, although it does increase performance by about 30%. Hyper-Threading does not provide an instant performance boost: only a select number of applications show any performance gain, and an even smaller number show noticeable gains. What this means is that Hyper-Threading enabled Pentium 4 processors can receive and process multiple software threads simultaneously, which can lead to greater overall performance, especially when running many applications at once. Without Hyper-Threading, two applications running in parallel would each slow down as they fight for CPU power. Serious multi-taskers will see the biggest gains with Hyper-Threading, although standard desktop users should see some small benefits as well.