Thursday, August 21, 2008

Solar advocates beef up solar thermal efforts


A National Renewable Energy Laboratory "irradiance" map, which shows the available sunlight around the world, suggests that the United States is the Saudi Arabia of solar energy, with twice the irradiance of Europe. It's a picture worth a thousand words to solar activists looking to make a convincing case for the emerging energy source.

Harnessing global sun power "is just an engineering effort," said Werner Koldehoff of the German Solar Industry Association.

Koldehoff's group and other European solar enthusiasts have come to America to make the case for solar thermal technology, an alternative to photovoltaics (PV) that attempts to harness the efficient phase change from water to steam. For cost and technology reasons, solar thermal is emerging as the preferred alternative energy technology in the race to replace fossil fuels with sustainable energy sources, many experts agree.

Solar benefits
Along with cost per watt—eventually cost per kilowatt—solar thermal's biggest selling point is its ability to store energy and deliver electricity to consumers during periods of peak power demand. Experts at a recent solar energy conference said "concentrating" solar thermal power could allow utilities and other emerging operators to store steam energy for up to six hours. Super-heated steam is used to drive turbines that generate electricity.

For residential, business and other lower-temperature applications, solar thermal could be used to heat water as well as for space heating. Koldehoff said the approach could also be harnessed for an emerging application he calls "solar-assisted cooling." Air conditioning requires roughly 4.5 times as much energy as heating, and the largest amount of solar energy is available in the late afternoon, just when demand for air conditioning peaks. Hence, advocates say, solar thermal power offers the least-expensive source of electricity when demand is highest.

Koldehoff said pilot solar-cooling projects are already under way in sunny Spain, and the technique could also be used for applications such as operating power-hungry desalinization plants. "The real future application in the next five years is [solar] cooling, and we need it badly, because we can't afford [the soaring cost of cooling] anymore," he said.

Concentrating, or sun-tracking, photovoltaics and solar thermal power collectors such as parabolic troughs follow the sun across the sky on one or more axes, focusing sunlight by as much as 1,500-fold in high-end systems to improve the efficiency of solar panels.

Power concentration
Experts note that solar thermal's so-called "dispatchability" means stored power could reliably generate electricity that could then be sold to utilities during load peaks on electric grids, usually after 5 p.m. This "load-shifting" approach makes solar thermal power far more valuable for plant operators than, say, photovoltaic energy that must be used immediately.
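
As a rough illustration of the dispatchability argument, the sketch below compares selling a day's stored output at an assumed midday price with selling the same energy during the post-5 p.m. peak. The plant size and prices are illustrative assumptions, not figures from the conference; only the six hours of storage comes from the article.

```python
# Hypothetical illustration of solar thermal "load-shifting" value.
# Plant size and prices are assumptions for the sketch, not article data.

stored_energy_mwh = 6 * 50          # six hours of storage from an assumed 50MW turbine
midday_price = 40.0                 # assumed $/MWh off-peak wholesale price
evening_peak_price = 120.0          # assumed $/MWh price after 5 p.m.

revenue_if_sold_immediately = stored_energy_mwh * midday_price
revenue_if_shifted_to_peak = stored_energy_mwh * evening_peak_price

print(f"Sell at midday:       ${revenue_if_sold_immediately:,.0f}")
print(f"Shift to 5 p.m. peak: ${revenue_if_shifted_to_peak:,.0f}")
print(f"Premium from load-shifting: "
      f"{revenue_if_shifted_to_peak / revenue_if_sold_immediately:.1f}x")
```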

"Thermal energy storage is the killer app of concentrating solar power technology," Andrew McMahan, VP for technology and projects at SkyFuel, told a packed solar technology conference last month held in conjunction with Semicon West. Solar thermal collector technologies like parabolic troughs have a good track record after more than 20 years of use, McMahan added. "The technology has steadily improved and is being demanded by utilities" when negotiating power supply agreements with solar operators.

Industry analysts like Jim Hines, Gartner Inc. research director for semiconductors and solar, agree that solar thermal appears best suited to large power projects aimed at supplying electricity to utilities. Other technologies, such as flat-plate photovoltaics and concentrating PV systems, work best in residential and commercial applications, Hines said. Photovoltaic "cost projections are encouraging, but future demand will depend on external factors" like solar thermal becoming the technology of choice.

Among the large solar thermal projects discussed at the solar confab were several "power tower" projects that use concentrating solar collectors to refocus sunlight on "solar boilers." For example, solar developer Brightsource Energy is building a 400MW solar thermal complex in California's Mojave Desert, a prime location for a number of planned solar thermal projects. Along with other industry executives, Brightsource CEO John Woolard noted that the primary challenge for solar thermal is efficiently transmitting power from remote desert locations to cities.

"Solar has become an important part of our resource mix," said Hal LaFlash, director of resource planning for Pacific Gas & Electric. "The big challenge is transmission [because] the highest resource potential is remote from population centers."

'Fossil-assisted solar'
Still, experts agreed that for large alternative-energy projects, solar thermal appears to be the best approach. According to estimates compiled by the Prometheus Institute for Sustainable Development, solar thermal power-generating costs could drop from about $4.25/W in 2008 to $2.50/W by 2020.
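
Those two data points imply a fairly modest annual rate of cost decline, which is easy to check:

```python
# Implied compound annual cost decline from the Prometheus Institute estimates:
# roughly $4.25/W in 2008 falling to about $2.50/W by 2020.
start_cost, end_cost = 4.25, 2.50   # $/W
years = 2020 - 2008

annual_decline = 1 - (end_cost / start_cost) ** (1 / years)
print(f"Implied cost decline: {annual_decline:.1%} per year over {years} years")
# Works out to roughly 4.3 percent per year.
```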

Solar thermal "is an extremely cost-effective technology compared with other [solar] technologies," though costs may not drop as fast as competing technologies like flat-plate or concentrating photovoltaics, said Travis Bradford, founder of the Prometheus Institute.

Nevertheless, solar thermal's "load-shifting" capability allows producers to store electricity, and then sell it during periods of peak demand. The predictability of solar thermal power along with technology innovations could help drive down start-up costs as the solar power infrastructure is built. Proponents add that the amount of energy needed to build and deploy solar thermal technologies is recovered in less than a year, more than twice as fast as comparable photovoltaic systems.

For now, advocates envision an energy future where solar energy supplements current fossil fuels. But as a sustainable energy infrastructure is built and solar technologies become more reliable and affordable, solar boosters like the German activist Werner Koldehoff are talking about a future in which dwindling fossil fuels are used to back up abundant solar power.

The earth's energy future hinges on "fossil-assisted solar," said Koldehoff. "We have a responsibility to future generations to make this happen."

- George Leopold
EE Times



CDMA is not dead, notes ABI


Although the overall dynamics of CDMA markets are overshadowed by the hype around UMTS/HSPA and the migration to LTE, CDMA operators continue to upgrade their networks to provide capacity for higher numbers of bandwidth-intensive data services, as well as escalating traffic load. This is according to ABI Research's report "Mobile Subscriber Market Data".

"Global EVDO Rev A subscriber numbers ramped up more than eightfold between Q2 07 and Q2 08," says ABI analyst Khor Hwai Lin. "The United States and South Korean markets show the highest growth rate for EVDO Rev A. The increased support for LTE from incumbent CDMA operators does not imply the imminent death of EVDO Rev A and B, because LTE is addressing different market needs compared to 3G."

EVDO Rev A subscribers will exceed 54 million by 2013 while Rev B subscribers will reach 25 million, reports ABI.

More than 31 million subscribers worldwide were already using HSDPA, and 3.2 million subscribers were on HSUPA networks, by Q2 08. Upgrades to HSUPA continue to take place aggressively across Western Europe and Asia-Pacific. Hence, HSUPA subscribers are estimated to hit 139 million by 2013.

"HSPA+ will contest with LTE and mobile WiMAX in the mobile broadband space," adds Asia-Pacific VP Jake Saunders. The 100Mbit/s download data rate difference between LTE (20MHz) and HSPA+ may not attract mid-tier operators to migrate, as LTE is based on OFDM technology that requires new components, while a move to HSPA+ is perceived to be more gradual transition."

Due to the large number of GSM 900 subscribers and the high possibility of refarming the spectrum for UMTS, ABI estimates that the majority of these global subscribers (about 1.2 billion by 2013) will be on the 900MHz-only band. In second place would be dual-band users on 900MHz and 1,800MHz (1 billion by 2013). Subscribers on the 2,100MHz band will ramp up steadily with a CAGR of 23.5 percent between 2007 and 2013.



Samsung leads NAND market ranking in Q2


The NAND flash memory market was roughly flat in Q2, and Samsung Electronics Co. Ltd stood as the only profitable supplier during the period, according to iSuppli Corp.

Meanwhile, Intel Corp. trails near the back of the pack in the rankings. Samsung stayed in the top spot in the NAND rankings in Q2, followed in order by Toshiba, Hynix Semiconductor Inc., Micron Technology Inc., Intel, Numonyx and Renesas Technology, according to iSuppli.

Global NAND flash memory revenue declined to $3.36 billion in Q2, down 2.5 percent from $3.45 billion in Q1, according to the research firm. "Five of the top seven NAND suppliers had either declines or zero revenue growth during the period," it added.

The NAND flash market has slid sharply since the start of 2008. iSuppli has reduced its 2008 annual NAND flash revenue growth forecast from 9 percent to virtually zero.

Hitting the mark
"Based on the recent rankings, Samsung maintained its number 1 position in the NAND market with 42.3 percent market share recorded during Q2 08," said iSuppli. In the quarter, it added Samsung's NAND sales hit $1.422 billion, down 1.9 percent from the first period. (See rankings table)

Toshiba Corp. holds the second position in the NAND market ranking with 27.5 percent market share recorded during Q2 08, it noted. "During the quarter, Toshiba's NAND sales hit $925 million, down 1.8 percent from the first period," said iSuppli.

"Hynix takes the third spot in the NAND market with 13.4 percent market share recorded during Q2 08," said iSuppli. "Its NAND sales hit $450 million, down 13.1 percent from the first period," it added.

"Micron holds fourth place in the NAND market with 8.9 percent market share recorded during the Q2 08, with NAND sales of $300 million, up 11.9 percent from the first period," it noted.

Intel Corp. holds the fifth spot in the NAND market with 5.2 percent market share and sales of $174 million, up 4.8 percent from the first period.
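
As a quick sanity check, the shares iSuppli reports line up with the revenue figures and the quarter's $3.36 billion total; a minimal sketch:

```python
# Cross-check of the Q2 08 NAND shares reported by iSuppli:
# share = supplier revenue / total market revenue ($3.36 billion).
total_q2_revenue = 3.36e9

q2_sales = {
    "Samsung": 1.422e9,
    "Toshiba": 0.925e9,
    "Hynix":   0.450e9,
    "Micron":  0.300e9,
    "Intel":   0.174e9,
}

for supplier, revenue in q2_sales.items():
    print(f"{supplier:8s} {revenue / total_q2_revenue:5.1%}")
# Samsung works out to about 42.3 percent, matching the ranking in the text.
```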

iSuppli said Micron narrowed its NAND flash memory market-share gap with Hynix, setting the stage for a battle for the industry's third rank this year.

The research firm believes that because Hynix is expected to concentrate mainly on DRAM throughout 2008, Micron's NAND market-share gap with Hynix will narrow further.

"In the memory world, process migrations and wafer scale are two crucial factors in driving down costs," said Nam Hyung Kim, analyst, iSuppli, in a statement.

"Micron has rapidly increased its 300mm wafer capacity and the 32nm geometry will boost its profitability in the near future, as well as its productivity. Because of its aggressive production ramp, Micron challenges Hynix in the market's third spot. By 1H 09, it is likely to compete with industry leaders Samsung and Toshiba based on profitability,'' he added.

The DRAM rankings did not change in Q2 08, but Elpida Memory Inc. is gaining ground on Hynix. Amid a major downturn in the sector, Samsung was still in first place in the DRAM rankings for Q2, followed in order by Hynix, Elpida, Micron, Qimonda AG, Powerchip Semiconductor Corp., Nanya Technology Corp., Promos, Etron Technology Inc. and Winbond Electronics Corp., according to iSuppli.

- Mark LaPedus
EE Times



Green trends heat up for next designs


Gone are the days when design was, well, design. Today it's design-for-manufacturability (DFM), design-for-quality, design-for-cost and design-for-environment (DfE).

DfE takes into account the environmental impact of a product from the time of its inception to the end of life, and then back into the resource pool for future products, typically referred to as cradle-to-cradle. "It's a radical departure from the status quo," said Pamela Gordon, lead consultant, Technology Forecasters Inc. (TFI).

"Over the past 50 years, we've moved to a disposable mentality for electronics. The benefits were quick and easy access to new technologies, but we had a buildup of electronic waste. DfE makes the product useful for many years," she added.

Maximizing use
When viewed through a DfE lens, virtually every aspect of a product is affected: its size, weight and energy requirements, for example. An important question often asked is, "Are there opportunities for reducing and consolidating components?" Doing so could save board real estate, trim the BOM and shrink the supplier count.

The choice of materials for both the product and the packaging is essential. Redesigning the product for ease of disassembly enables reusable parts to be recovered at end of life. For parts that can't be reused, the design has to maximize recyclability to reduce the waste going to landfill.

Once the product is designed, there are supply chains and logistics issues to consider, such as determining the manufacturing location to reduce the cost and carbon footprint. Another factor is how many miles all the components have to travel before they come together in the final product at the customer's location. One top-tier electronics OEM estimates that the carbon footprint of its supply chain is 20 times that of its own operations.

Seems like a lot to consider? It is, but virtually none of the DfE considerations are inconsistent with the cost or quality requirements of design. In fact, they can contribute in a positive way to both cost and quality.

The Xerox experience
Consider Xerox Corp., which has had a formal environmental commitment since 1991. By applying the principles of DfE to the design of the iGen3, a commercial printing system, the company dramatically improved the environmental impact of the product. More than 90 percent of the parts and subsystems within the machine are either recyclable or can be manufactured again. Eighty percent of the waste produced by the iGen3 is reusable, recyclable or returnable.

Besides Xerox, many other top-tier OEMs, such as Hewlett-Packard, Apple Inc., IBM Corp. and Intel Corp., have DfE programs in one form or another. Among midtier and smaller companies, the rate of adoption has been slow; some say it is nonexistent. "I don't see companies dealing seriously with DfE," said Michael Kirschner, president, Design Chain Associates, a design-consulting firm based in San Francisco. "There's no real incentive outside of the fear of Greenpeace," he added.

EU compliance
Gordon said: "Most electronics companies have only gone as far as compliance with the European Union's environmental directives on ROHS and WEEE. "DfE is like the quality movement of the 1980s. Those who are slow to embrace the trend did less favorably than those that figured out it produced financial benefit," she added.

There are a few exceptions, however. One is Blue Coat Systems, a high-growth maker of appliance-based products that help IT organizations secure and accelerate traffic between users and applications across the enterprise WAN. A year ago, the company gathered a group of hardware engineers and product managers in a room for a DfE workshop and asked them to disassemble a range of products: Blue Coat's own, competitors' units and a benchmark product that had applied DfE principles.

"The exercise was eye opener. We were surprised by how well Blue Coat products measured up to the benchmark product, even though we had not consciously designed for ease of recyclability," said David Cox, VP of operations, Blue Coat. "The exercise made us realize that we had a great opportunity to integrate DfE into the next generation of our product," he added.

Blue Coat created a cross-functional team to explore opportunities for DfE in a next-generation product. One design change was in power supplies: the current generation employed a power supply that was less than 80 percent efficient, and Blue Coat set a goal of more than 90 percent efficiency.
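
The payoff from that efficiency jump is straightforward to estimate. The sketch below assumes a hypothetical always-on load and electricity price; only the 80 and 90 percent efficiency figures come from Blue Coat.

```python
# Rough savings from raising power supply efficiency from ~80% to ~90%.
# Load, hours and electricity price are assumptions, not Blue Coat figures.
load_watts = 300           # assumed DC load of the appliance
hours_per_year = 24 * 365  # always-on network appliance
price_per_kwh = 0.10       # assumed $/kWh

def annual_input_kwh(efficiency):
    """Energy drawn from the wall to deliver the DC load for a year."""
    return load_watts / efficiency * hours_per_year / 1000.0

old_kwh = annual_input_kwh(0.80)
new_kwh = annual_input_kwh(0.90)
print(f"Saved per unit: {old_kwh - new_kwh:.0f} kWh/year "
      f"(about ${(old_kwh - new_kwh) * price_per_kwh:.0f}/year)")
```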

Size does matter
The designers chose an open-frame power supply that was smaller and 50 percent lighter, had better heat dissipation and consumed less energy. Because of its smaller size, more units can be packaged in a container, which means lower CO2 emissions per unit during transport. For the next fiscal year, Blue Coat is focusing on environmental initiatives that will save the company in excess of $1 million. "That's a conservative estimate," said Cox. "Employees really care about this," he added. "The beauty of it is that it's a no-brainer. You save the company money, improve the customer experience and help the environment," he stressed.

- Bruce Rayner
EE Times



Intel's Canmore connects TV, CE devices to Internet


At the Intel Developer Forum, Intel Corp. introduced the Intel Media Processor CE 3100, the first in a new family of purpose-built SoCs for consumer electronics (CE) devices based on the company's popular Intel architecture (IA) blueprint.

Executives also provided updates on the mobile internet device (MID) category and the Intel Atom processor, unveiled a branding initiative with DreamWorks Animation SKG Inc. around the shift to 3D moviemaking and outlined a number of efforts to speed many-core processor software design.

The CE 3100 has been developed for Internet-connected CE products such as optical media players, connected CE devices, advanced cable STBs and DTVs. The media processor (previously codenamed "Canmore") combines CE features for high-definition video support, home-theater quality audio and advanced 3D graphics, with the performance, flexibility and compatibility of IA-based hardware and software.

Intel expects to begin shipments of this product next month.

Intel and its customers have been working together to develop a variety of products for emerging growth areas—consumer electronics, MIDs, netbooks and embedded computers—each based on Intel architecture that enables uncompromised Internet access.

"As consumers look to stay connected and entertained regardless of where they are and what device they are using, the Web continues to affect our lives in new ways and is quickly moving to the TV thanks to a new generation of Internet-connected CE devices," said Eric Kim, Intel senior VP and general manager of digital home group. "As Intel delivers its first IA SoC with performance and Internet compatibility for CE devices, we are providing a powerful and flexible technology foundation upon which the industry can quickly innovate upon. This technology foundation will help the high-tech industry bring devices to market faster, as well as encourage new designs and inspire new services, such as connecting the TV to the Internet."

Extending IA into consumer electronics
The Intel Media Processor CE 3100 is a highly integrated SoC that pairs a powerful IA processor core with leading-edge multistream video decoding and processing hardware. It also adds a three-channel 800MHz DDR2 memory controller, dedicated multichannel dual audio DSPs, a 3D graphics engine enabling advanced user interfaces and electronic program guides, and support for multiple peripherals, including USB 2.0 and PCIe.

The Intel Media Processor CE 3100 also features Intel Media Play Technology that combines hardware-based decoding for broadcast TV and optical media playback with software-based decode for Internet content. When a consumer watches broadcast TV or content on optical media players, the video is encoded in standard formats, such as MPEG-2, H.264 or VC-1. Intel Media Play Technology software routes the video to the on-chip hardware decoders. When viewing Internet content, the software automatically routes the video, and audio as applicable, to a software codec running on the IA processor core. As Internet video becomes ubiquitous, the ability to decode multiple video and audio formats will give the industry greater flexibility in adapting to evolving standards and technologies, and consumers a wider range of viewing experiences.
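
Intel has not published the routing logic itself, but the behavior described above amounts to a simple dispatch: broadcast and optical-media formats go to the fixed-function hardware decoders, while Internet content falls back to a software codec on the IA core. The sketch below is illustrative only; the function and format names are not Intel APIs.

```python
# Illustrative sketch of the decode-path routing described for Intel Media
# Play Technology. Function and format names are hypothetical, not Intel APIs.

HARDWARE_DECODE_FORMATS = {"MPEG-2", "H.264", "VC-1"}

def route_stream(codec_format, source):
    """Pick a decode path for a stream based on its format and origin."""
    if source in ("broadcast", "optical") and codec_format in HARDWARE_DECODE_FORMATS:
        return "on-chip hardware decoder"
    # Internet content (and anything else) falls back to a software codec
    # running on the IA processor core.
    return "software codec on IA core"

print(route_stream("H.264", "broadcast"))  # on-chip hardware decoder
print(route_stream("FLV", "internet"))     # software codec on IA core
```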

The Intel Media Processor CE 3100 is scheduled to ship to CE manufacturers.

Additionally, Intel announced the next generation of parallel programming tools that offer new options for multi-core software development for mainstream client applications. The Intel Parallel Studio includes expanded capabilities for helping design, code, debug and tune applications to harness the power of multicore processing through parallel programming. Intel Parallel Studio will ease the path for parallel application development to deliver performance and forward scaling to many-core processors for Microsoft Visual Studio developers.



Saturday, August 2, 2008

Media drives 12% increase in home nets


A new report from IMS Research forecasts that WLANs, xDSL access networks and DSPs will continue to dominate home networking and sees an estimated 12 percent growth in unit sales over the current five-year period.

"The opportunity is for those who can cash in on rising interest in whole home multimedia networks for video and voice," said Tom Hackenberg, an embedded processing research analyst with IMS. "Home data networks are beginning to mature, but multimedia capable whole home networks are still very much an emerging market," he said.

The market for access and LAN devices in the home will rise 11.9 percent on a compound annual basis from 256 million units in 2007 to about 450 million units in 2012, according to a new report from IMS. However, revenues will increase at a statelier 7.6 percent pace from about $10 billion in 2007 to $14.5 billion in 2012, as product prices decline.

"Interestingly security and home automation products will piggyback on the emerging media networks," Hackenberg added.

LAN devices such as routers, bridges and interface cards will see the fastest growth, rising some 17.6 percent on a unit basis over the period to about 265 million units, according to IMS. The percentage of those devices based around wireless nets such as Wi-Fi will grow from about 65 percent today to about 70 percent by 2012, the report projects.

"There also will be significant growth in hybrid wireless/wired home networks over the next five years," said Hackenberg.

Wireless Ethernet links currently make up the second largest number of connections with some 30 million links deployed, but the group is only growing about five percent on a compound annual basis. By contrast, powerline, phoneline and coax links are growing at rates ranging from 28 to 50 percent.

Worldwide, xDSL technologies continue to be the home access net of choice. About 71 percent of home access networks were based on some form of xDSL in 2007, a slice that will decline just slightly to 64 percent by 2012, according to the report.

Under the covers, DSPs will continue to dominate other processor types as the most prevalent in home networking systems. About 500 million DSPs shipped into home net systems in 2007, a figure that will grow to more than 825 million by 2012, IMS predicts. The next two largest categories in digital silicon for home networks are 4- to 8-bit microprocessors and ASSPs, roughly tied at a little less than 400 million units each shipping into home net systems by 2012.

"DSPs will continue to be the cheapest alternative for signal processing jobs that will become increasingly important as home nets move to carrying more voice and video traffic," Hackenberg said.

Home networking systems are undergoing a transition from hard-coded MCUs to low end microprocessors. That's because designers need more performance and flexibility to deal with nets that increasingly sport more bandwidth to link to a growing number of devices, he added.

- Rick Merritt
EE Times



Siemens finds JV partner for enterprise comms biz


Siemens has announced that The Gores Group will acquire a 51 percent stake in its enterprise communications business Siemens Enterprise Communications (SEN). Siemens will retain a stake of 49 percent.

"We have been looking for an opportunity to expand our presence in the enterprise networking and communications space and this partnership with Siemens provides the perfect fit," noted Alec Gores, founder and chairman of Gores.

"We are continuing to intensify the focus of our portfolio on the three sectors, which are energy, industry and healthcare. In Gores, we have found an extremely experienced technology and telecommunications partner, who strengthens the business with the contribution of the two assets Enterasys and SER Solutions," said Joe Kaeser, Siemens chief financial officer. The deal of the joint venture is expected to close at the end of Siemens fiscal year 2008, pending regulatory approval.

Gores and Siemens plan to invest approximately 350 million euros ($544 million) in the joint venture, not including R&D and other expenditures in the ordinary course of business. The investments will be made to launch innovative SEN products, acquire other technology platforms that capitalize on SEN's powerful distribution organization and further drive the business's transition from a hardware supplier to a software and service provider.

When the joint venture is launched, the SEN business will also be supplemented and strengthened by combining it with two of Gores' current portfolio companies: Enterasys, a network equipment and security solutions provider, and SER Solutions, a call center software company. "Combining the three companies will lead to a more complete enterprise networking and communications offering that will leverage SEN's powerful distribution capabilities, global reach and extensive customer base," stated Alec Gores.

On an operational level, business will be driven by Gores but the JV company will be entitled to continue using the Siemens brand. Key patents and licenses will be transferred to the joint venture. Production facilities in Leipzig, Germany; Curitiba, Brazil; and Thessaloniki, Greece, will be transferred to the joint venture.



Three giants collaborate on cloud computing


HP, Intel Corp. and Yahoo Inc. have announced the creation of a global, multidata center, open source test bed for the advancement of cloud computing research and education. The initiative aims to promote collaboration among industry, academia and governments by removing barriers to research in data-intensive, Internet-scale computing.

The three companies' cloud computing test bed is intended to provide a globally distributed, Internet-scale testing environment that encourages research on the software, datacenter management and hardware issues associated with cloud computing at a larger scale, and to strengthen support for research groups working on cloud applications and services.

Diverse connection
HP, Intel and Yahoo! have partnered with the Infocomm Development Authority of Singapore (IDA), the University of Illinois at Urbana-Champaign, and the Karlsruhe Institute of Technology in Germany to create the research initiative. The U.S. National Science Foundation is also a partner.

The test bed will initially consist of six "centers of excellence" at IDA facilities, the University of Illinois at Urbana-Champaign, the Steinbuch Centre for Computing of the Karlsruhe Institute of Technology, HP Labs, Intel Research and Yahoo. Each location will host a cloud computing infrastructure, largely based on HP hardware and Intel processors, and will have 1,000 to 4,000 processor cores capable of supporting the data-intensive research associated with cloud computing. The test bed locations are expected to be in full operation and accessible to researchers worldwide through a selection process later this year.

Kick-off strategy
The test bed will build on Yahoo's technical leadership in open source projects by running Apache Hadoop, an open source distributed-computing project of the Apache Software Foundation, as well as other open source distributed-computing software such as Pig, the parallel programming language developed by Yahoo Research.
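
For readers unfamiliar with Hadoop, its MapReduce programming model is easiest to see in a toy example. The mapper and reducer below form a generic Hadoop Streaming-style word count in Python; they are included only to illustrate the kind of data-parallel job the test bed is meant to host and are not part of the announcement.

```python
# Minimal Hadoop Streaming-style mapper and reducer (word count), included
# only to illustrate the MapReduce model Hadoop implements. Run locally as:
#   cat input.txt | python wordcount.py map | sort | python wordcount.py reduce
import sys
from itertools import groupby

def mapper(lines):
    # Emit "word<TAB>1" for every word seen.
    for line in lines:
        for word in line.split():
            print(f"{word}\t1")

def reducer(lines):
    # Input must be sorted by key (Hadoop does this between map and reduce).
    pairs = (line.rstrip("\n").split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
```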

"The HP, Intel and Yahoo cloud computing test bed is extending our commitment to the global, collaborative research community, along with the advancement of new sciences in the Internet," said Prabhakar Raghavan, head of Yahoo Research. "This test bed will enable researchers to test applications at the Internet scale and provide them access to the underlying computing systems to advance their learning on how systems software and hardware function in a cloud environment," he added.

Specialized cloud services
Researchers at HP Labs will use the test bed to conduct advanced research in the areas of intelligent infrastructure and dynamic cloud services. HP Labs recently refocused its strategy to help move HP and its customers toward cloud computing, a driving force behind HP's vision of 'Everything as a Service.' With this vision, devices and services can interact through the cloud, and businesses and individuals will use services that cater to their needs based on location, preferences, calendar and communities.

"The technology industry must think about the cloud as a platform for creating new services and experiences. This requires an entirely new approach to the way we design, deploy and manage cloud infrastructure and services," said Prith Banerjee, senior VP of research, HP, and director, HP Labs. "The HP, Intel and Yahoo! Cloud Computing Test Bed will enable us to tap the brightest minds in the industry, as well as other related sectors to share their ideas in promoting innovation," he added.

Intel's participation
"We are willing to engage with the academic research community," said Andrew Chien, VP and director of Intel Research. "Creating large-scale test beds is essential to draw away barriers to innovation and encourage experimentation and learning at scale," he noted.

"With the ready and available Internet-scale resources in Singapore to support cloud computer research and development work, we can collaborate with like-minded partners to advance the field," said Khoong Hock Yun, assistant chief executive, infrastructure development group, Infocomm. "Cloud computing is the next paradigm shift in computer technology, and this may be the next 'platform' for innovative ecosystems. This will allow Singapore to leverage this new paradigm for greater economic and social growth," he added.

In November 2007, Yahoo announced the deployment of a supercomputing-class datacenter, called M45, for cloud computing research. Carnegie Mellon University was the first institution to take advantage of this supercomputer. Yahoo also announced this year an agreement with Computational Research Laboratories to jointly support cloud-computing research and make one of the 10 fastest supercomputers in the world available to academic institutions in India.

High-performance innovations
In 2008, HP announced the formation of its Scalable Computing & Infrastructure Organization (SCI), which includes a dedicated set of resources that provide expertise and spearhead development efforts to build scalable solutions designed for high-performance and cloud computing customers. The company introduced scalable computing offerings including the Intel Xeon-based HP ProLiant BL2x220c G5, the first server blade to combine two independent servers in a single blade, and the HP StorageWorks 9100 Extreme Data Storage System (ExDS9100), a highly scalable storage system designed to simplify the management of multiple petabytes. HP also introduced the HP Performance-Optimized Datacenter, an open architecture, compact and shipped-to-order alternative for deploying IT resources.



Compact housing defines UHF FM transmitter module


Radiometrix's TX2S is a miniature PCB-mounted UHF radio transmitter for the 433.92MHz (UK) or 434.42MHz (European) radio bands.

Contained within a compact package, measuring 20mm x 10mm x 2mm, the TX2S allows design engineers to create a simple data link (either with a node-to-node, or multi-node architecture), which is capable of supporting rates of up to 40Kbit/s at distances of as much as 75m in-building and 300m across open ground.

The crystal-based PLL controlled FM transmitter operates off an input voltage of between 2.2 and 4V (allowing it to be used in portable, battery-powered system designs) and delivers nominally +0dBm at 7mA. The transmitter module is approved to the EN 300 220-3 and EN 301 489 standards. Internal filtering helps to ensure EMC levels are kept low by minimizing spurious radiation.
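
For battery-powered designs, the 7mA transmit figure only translates into battery life once a duty cycle is assumed. The sketch below uses hypothetical battery capacity, sleep current and duty-cycle values; only the 7mA transmit current comes from the figures quoted above.

```python
# Rough battery-life estimate for a low-duty-cycle node using the TX2S.
# Battery capacity, sleep current and duty cycle are assumptions for the
# sketch, not Radiometrix specifications; only the 7mA TX figure is quoted.
battery_mah = 1000.0     # assumed battery capacity
tx_current_ma = 7.0      # transmitter current from the figures above
sleep_current_ma = 0.01  # assumed controller sleep current
duty_cycle = 0.01        # assumed: transmitting 1 percent of the time

avg_current_ma = tx_current_ma * duty_cycle + sleep_current_ma * (1 - duty_cycle)
hours = battery_mah / avg_current_ma
print(f"Average current: {avg_current_ma:.3f} mA -> about {hours / 24:.0f} days")
```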

Key applications include car/building security systems, EPOS monitoring, inventory tracking, remote industrial process control, and computer networking. Price is about $15 each in quantities between 1 and 99.

- Ismini Scouras
eeProductCenter



Multithreading comes undone


EDA vendors have struggled to meet the challenge of multicore IC design by rolling out multithreading capabilities for their tools. Nonetheless, the question cannot be ignored: Is multithreading the best way to exploit multicore systems effectively?

"Threads are dead," asserted Gary Smith, founder and chief analyst for Gary Smith EDA. "It is a short-term solution to a long-term problem."

At the 45nm node, more and more designs reach and exceed the 100 million-gate mark. These designs break current IC CAD tools, forcing EDA vendors to develop products capable of parallel processing.

Until now, parallel processing has relied on threading. Threading, however, tends to show its limits at four processors, and EDA vendors may have to come up with new ways of attacking the problem.
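
Amdahl's law is the usual back-of-the-envelope explanation for that limit: any serial fraction in a tool quickly caps the speedup extra threads can deliver. The serial fractions in the sketch below are arbitrary illustrations, not measurements of any EDA tool.

```python
# Amdahl's law: speedup = 1 / (serial + (1 - serial) / cores).
# The serial fractions are arbitrary illustrations, not EDA tool measurements.
def speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for serial in (0.05, 0.20, 0.40):
    row = ", ".join(f"{c} cores: {speedup(serial, c):.1f}x" for c in (2, 4, 8, 16))
    print(f"serial={serial:.0%}  {row}")
# With 20 percent serial work, returns diminish sharply beyond four cores,
# which is roughly the limit the article describes.
```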

"Threads will only give you two or three years," Smith said. "Library- or model-based concurrency is the best midterm approach."

Looking into the future
EDA vendors interviewed at the 2008 Design Automation Conference (DAC) painted a more nuanced picture of the future of multithreading.

"We have not seen the limits to multithreading in the timing-analysis area," said Graham Bell, marketing counsel for EDA startup Extreme DA Corp. "We see good scaling for three or four process threads. We get to see difficulties beyond that, but they are not dramatic."

With Extreme DA's GoldTime, a multithreaded static and statistical timing analyzer, the company has applied a fine-grained multithreading technique based on ThreadWave, a netlist-partitioning algorithm. "Because of our unique architecture, we have a small memory footprint," Bell said. "We have not seen the end of taking advantage of multithreading."

For applications with fine-grained parallelism, multithreading is one of the most generic ways to exploit multicore chips, said Luc Burgun, CEO of Emulation and Verification Engineering SA. "On the other hand, multithread-based programs can also be quite difficult to debug." That's because they "break the sequential nature of the software execution, and you may easily end up having nondeterministic behavior and a lot of headaches."
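
The nondeterminism Burgun describes is easy to reproduce with nothing more than a shared counter updated without a lock; whether the lost updates show up on a given run or interpreter version varies, which is precisely what makes such bugs hard to chase. A minimal sketch:

```python
# Minimal illustration of nondeterministic multithreaded behavior: four
# threads updating a shared counter without a lock. The read-modify-write
# is not atomic, so updates can be lost; whether and how often that happens
# varies from run to run, which is what makes such bugs hard to debug.
import threading

counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        counter += 1   # not atomic: load, add, store

threads = [threading.Thread(target=worker, args=(200_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"expected {4 * 200_000}, got {counter}")  # may differ between runs
```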

According to Burgun, multiprocess remains the "easiest and safest way to exploit multicore." He said he expects some interesting initiatives to arise from parallel-computing experts to facilitate multicore programming. "From that standpoint, CUDA [the Nvidia-developed Compute Unified Device Architecture] looks very promising," Burgun said.

Simon Davidmann, president and CEO of Imperas Ltd, delivered a similar message. "Multithreading is not the best way to exploit multicore resources," he said. "For some areas, it might be OK, but in terms of simulation, it is not."

Multithreading is not the only trick up Synopsys Inc.'s sleeve, said Steve Smith, senior director of product platform marketing. "Within each tool, there are different algorithms. When looking at each tool, we profile the product to see the largest benefits to multithreading," he said. "Multithreading is not always applicable. If not, we do partitioning."

As chipmakers move to eight and 16 cores, a hybrid approach will be needed, asserted Smith, suggesting a combination of multithreading and partitioning.
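
What partitioning means in practice can be sketched generically: split the design into independent chunks and farm each chunk out to its own worker process. The sketch below uses Python's multiprocessing module with a dummy workload; it is an illustration of the approach, not Synopsys code.

```python
# Generic sketch of the partition-then-parallelize approach: split a design
# into independent chunks and process each chunk in its own worker process.
# This is an illustration of the idea, not Synopsys code.
from multiprocessing import Pool

def analyze_partition(cells):
    """Stand-in for per-partition work (e.g., checking one region of a design)."""
    return sum(hash(cell) % 97 for cell in cells)   # dummy workload

def partition(design, n_parts):
    """Split the design into n_parts roughly equal chunks."""
    return [design[i::n_parts] for i in range(n_parts)]

if __name__ == "__main__":
    design = [f"cell_{i}" for i in range(100_000)]
    with Pool(processes=4) as pool:
        results = pool.map(analyze_partition, partition(design, 4))
    print(f"combined result: {sum(results)}")
```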

To illustrate the point, Smith cited a host of Synopsys' multicore efforts in the area of multithreading. "HSpice has been broadly used by our customers. This is typically the tool you do not want to start from scratch," he said.

HSpice multithreading has come in stages, noted Smith. "Last year, we multithreaded the model-evaluation piece, and it gave a good speedup. Then, in March, we introduced the HSpice multithreaded matrix solver. We want to make sure our customers are not impacted, and we do it [multithreading] piece by piece," he said.

Another trend Synopsys is investigating, Smith continued, is pipelining. This technique—an enterprise-level activity, since it demands the involvement of IT—collapses multiple tasks, such as optical proximity correction and mask-data preparation, into a single pipeline.
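
A pipeline in this sense simply overlaps stages so that one chunk of data can be in the second step while the next chunk is still in the first. The sketch below is a generic two-stage pipeline built on a bounded queue; the stage names are placeholders, not actual optical-proximity-correction or mask-data-preparation tools.

```python
# Generic two-stage pipeline sketch. Stage names are placeholders for tasks
# like optical proximity correction and mask-data preparation, not real tools.
from queue import Queue
from threading import Thread

def stage_a(inputs, out_q):
    for chunk in inputs:
        out_q.put(f"corrected({chunk})")   # stand-in for the first task
    out_q.put(None)                        # signal end of stream

def stage_b(in_q):
    while (item := in_q.get()) is not None:
        print(f"prepared({item})")         # stand-in for the second task

q = Queue(maxsize=4)                       # bounded queue keeps the stages in step
chunks = [f"layout_block_{i}" for i in range(8)]
threads = [Thread(target=stage_a, args=(chunks, q)),
           Thread(target=stage_b, args=(q,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```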

Last year, Magma Design Automation Inc. unveiled an alternative to multithreading, using a streaming-data-flow-based architecture for its Quartz-DRC design rule checker. Multithreading provides a less-fine-grained parallel-processing capability than Magma's data flow architecture, said Thomas Kutzschebauch, senior director of product engineering at Magma.

Magma's multicore strategy is focused on massive parallelism, Anirudh Devgan, VP and general manager of the custom design business unit, said at a DAC panel session on reinventing EDA with "manycore" processors.

"Four CPU boxes are just the beginning of a trend, and EDA software has to work on large CPUs with more than 32 cores," he said. "Parallelism offers an opportunity to redefine EDA productivity and value. But just parallelism is not enough, since parallelizing an inefficient algorithm is a waste of hardware."

Devgan's conclusion was that tools have to be productive, integrated and massively parallel.

Seeing beyond C
As he unveiled his "Trends and What's Hot at DAC" presentation, Gary Smith expressed doubts about C as the ultimate language for multicore programming. He cited the identification of a new embedded-software language as one of the top 10 issues facing the industry this year, and asserted, "a concurrent language will have to be in place by 2015."

EDA executives did not debate the point. "We will change language over time," stated Joachim Kunkel, VP and general manager of the solutions group at Synopsys. "We are likely to see a new language appear, but it takes time. It is more an educational thing."

On the software side, meanwhile, reworking the legacy code is a big issue, and writing new code for multicore platforms is just as difficult. Nonetheless, Davidmann held that "the biggest challenge is not writing, reworking or porting code, but verifying that the code works correctly, and when it doesn't, figuring out how to fix it. Parallel processing exponentially increases the opportunities for failure."

Traditionally, Davidmann said, software developers think sequentially. Now, that has to change. Chip design teams have been writing parallel HDL for 20 years, so it's doable—though it will take much effort and new tool generations to assist software teams in this task.

With single-processor platforms and serial code, functional verification meant running real data and tests directed to specific pieces of functionality, Davidmann said. "Debug worked as a single window within a GNU project debugger."

But with parallel processing, "running data and directed tests to reduce bugs does not provide sufficient coverage of the code," he said. "New tools for debug, verification and analysis are needed to enable effective production of software code."

Davidmann said Imperas is announcing products for verification, debug and analysis of embedded software for heterogeneous multicore platforms. "These tools have been designed to help software development teams deliver better-quality code in a shorter period of time," he said.

To simplify the software development process and help with the legacy code, Burgun said customers could validate their software running on the RTL design emulated in EVE's ZeBu. It behaves as a fast, cycle-accurate model of the hardware design.

For instance, he continued, some EVE customers can run their firmware and software six months prior to tapeout. They can check the porting of the legacy code on the new hardware very early and trace integration bugs all the way to the source, whether in software or in hardware. When the engineering samples come back from the fab, 99 percent of the software is already validated and up and running.

Thus, "ZeBu minimizes the number of respins for the chip and drastically reduces the bring-up time for the software," Burgun said.

- Anne-Francoise Pele
EE Times


