October 9, 2011

State of CAD and Engineering Workstation Technologies

Abbreviations

CAD is Computer-Aided Design
CAE is Computer-Aided Engineering
CEW is Computer-Aided Design and Engineering Workstation
CPU is Central Processing Unit
GPU is Graphics Processing Unit

Hardware for CPU-Intensive Applications

Computer hardware is designed to support software applications, and it is a common but simplistic view that higher-spec hardware will enable all software applications to perform better. Until recently, the CPU was truly the only device for computation in software applications. Other processors embedded in a PC or workstation were dedicated to their parent devices, such as a graphics adapter card for display, a TCP-offloading card for network interfacing, and a RAID algorithm chip for hard disk redundancy or capacity extension. However, the CPU is no longer the only processor for software computation. We will justify this in the next section.

Legacy software applications still depend on the CPU to do computation. That is, the common view is valid for software applications that have not taken advantage of other types of processors for computation. We have done some benchmarking and believe that applications like Maya 03 are CPU intensive.

For CPU-intensive applications to perform faster, the general rule is to have the highest CPU frequency, more CPU cores, more main memory, and perhaps ECC memory (see below).

Legacy software was not designed for parallel processing. Therefore we should check with the software vendor on this issue before expecting multiple-core CPUs to yield higher performance. Regardless, we can achieve higher throughput by executing multiple instances of the same application, but this is not the same as multi-threading within a single application.
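The distinction can be sketched in Python. The function below is a hypothetical stand-in for a single-threaded legacy application: running several independent instances of it raises aggregate throughput on a multi-core CPU, yet each individual instance finishes no faster.

```python
from multiprocessing import Pool

def legacy_job(n):
    """Stand-in for one instance of a single-threaded legacy application:
    it only ever uses one core, no matter how many are available."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [200_000] * 4  # four independent instances of the same application
    # A 4-core CPU raises aggregate throughput by running the four instances
    # side by side -- but each instance is still limited to one core.
    with Pool(processes=4) as pool:
        results = pool.map(legacy_job, jobs)
    print(results[0] == legacy_job(200_000))  # prints True
```

This is why a vendor must confirm that a single application instance is itself multi-threaded before extra cores can shorten one run.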

ECC stands for Error Checking and Correction. A memory module transmits in words of 64 bits. ECC memory modules incorporate electronic circuits to detect a single-bit error and correct it, but they cannot rectify two bit errors occurring in the same word. Non-ECC memory modules do not check at all - the system continues to work unless a bit error violates pre-defined rules for processing. How often do single-bit errors occur nowadays? How damaging would a single-bit error be? Consider this quotation from Wikipedia in May 2011: "Recent tests give widely varying error rates with over 7 orders of magnitude difference, ranging from 10⁻¹⁰ to 10⁻¹⁷ errors/bit-hour, approximately one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory."
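The correction principle can be illustrated with a toy example. Real ECC modules use a SECDED code over 64-bit words; the minimal Hamming(7,4) code below, in Python, shows only the single-bit-correction half of that idea, on 4 data bits instead of 64.

```python
def hamming_encode(nibble):
    """Encode 4 data bits as a 7-bit Hamming codeword (p1 p2 d1 p3 d2 d3 d4)."""
    d = [(nibble >> i) & 1 for i in range(4)]  # d1..d4
    p1 = d[0] ^ d[1] ^ d[3]  # parity over codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]  # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]  # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming_decode(bits):
    """Return (corrected 4-bit value, syndrome); syndrome 0 means no error."""
    b = bits[:]
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # 1-based position of a flipped bit
    if syndrome:
        b[syndrome - 1] ^= 1  # a single-bit error is corrected in place
    return b[2] | (b[4] << 1) | (b[5] << 2) | (b[6] << 3), syndrome

word = hamming_encode(0b1011)
word[4] ^= 1  # simulate a single-bit memory error
value, pos = hamming_decode(word)
print(value == 0b1011, pos)  # prints: True 5
```

As in real ECC hardware, two flipped bits in the same word would defeat this code: the syndrome would point at the wrong position and the "correction" would corrupt the data.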

Hardware for GPU-Intensive Applications

The GPU has now been developed to earn the prefix GP, for General Purpose. To be exact, GPGPU stands for General-Purpose computation on Graphics Processing Units. A GPU has many cores that can be used to accelerate a wide range of applications. According to GPGPU.org, a central resource for GPGPU news and information, developers who port their applications to the GPU often achieve speedups of orders of magnitude compared to optimized CPU implementations.

Many software applications have been updated to capitalize on the newfound capabilities of the GPU. Catia 03, Ensight 04 and Solidworks 02 are examples of such applications. As a result, these applications are far more sensitive to GPU resources than to CPU resources. That is, to run such applications optimally, we should invest in the GPU rather than the CPU for a CEW. According to its own website, the new Abaqus product suite from Simulia - a Dassault Systemes brand - leverages the GPU to run CAE simulations twice as fast as on the CPU alone.

Nvidia has released six member cards of the new Quadro Fermi family as of April 2011, in ascending order of power and cost: 400, 600, 2000, 4000, 5000 and 6000. According to Nvidia, Fermi delivers up to six times the tessellation performance of the former family, called Quadro FX. We shall equip our CEW with Fermi to achieve optimum price/performance combinations.

The inherent contribution of the GPU to performance depends on one additional issue: CUDA compliance.

State of CUDA Developments

According to Wikipedia, CUDA (Compute Unified Device Architecture) is a parallel computing architecture developed by Nvidia. CUDA is the computing engine in Nvidia GPUs, accessible to software developers through variants of industry-standard programming languages. For example, programmers use C for CUDA (C with Nvidia extensions and certain restrictions), compiled through a PathScale Open64 C compiler, to code algorithms for execution on the GPU. (The most recent stable version is 3.2, released in September 2010 to software developers.)
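The core of the CUDA model is that each of thousands of lightweight threads computes one element, locating its element with the index blockIdx.x * blockDim.x + threadIdx.x. The pure-Python sketch below simulates that indexing for a SAXPY-style kernel; the function names mirror CUDA's built-in names, but the code itself is an illustrative model, not real CUDA.

```python
def saxpy_kernel(thread_idx, block_idx, block_dim, a, x, y, out):
    """What one CUDA thread would do: compute a single element of a*x + y."""
    i = block_idx * block_dim + thread_idx  # CUDA's global thread index
    if i < len(x):                          # guard threads past the array end
        out[i] = a * x[i] + y[i]

def launch(grid_dim, block_dim, kernel, *args):
    """Simulate a <<<grid_dim, block_dim>>> kernel launch by visiting
    every (block, thread) pair; on a GPU these run concurrently."""
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(thread_idx, block_idx, block_dim, *args)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [10.0] * 5
out = [0.0] * 5
launch(2, 4, saxpy_kernel, 2.0, x, y, out)  # 2 blocks x 4 threads = 8 threads
print(out)  # prints [12.0, 14.0, 16.0, 18.0, 20.0]
```

The speedups quoted above come from the GPU executing these per-element threads in parallel across hundreds of cores, where the Python loop here visits them one at a time.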

The GPGPU website has a preview of an interview with John Humphrey of EM Photonics, a pioneer in GPU computing and developer of the CUDA-accelerated linear algebra library. Here is an excerpt of the preview: "CUDA allows for very direct expression of exactly how you want the GPU to perform a given unit of work. Ten years ago I was doing FPGA work, where the great promise was the automated conversion of high-level languages to hardware logic. Needless to say, the huge abstraction meant the result wasn't good."

The Quadro Fermi family implements CUDA compute capability 2.1, whereas Quadro FX implemented 1.3. The newer version provides significantly richer features. For example, Quadro FX did not support "floating point atomic additions on 32-bit words in shared memory", whereas Fermi does. Other notable improvements are:

Up to 512 CUDA cores and 3.0 billion transistors
Nvidia Parallel DataCache technology
Nvidia GigaThread engine
ECC memory support
Native support for Visual Studio

State of Computer Hardware Developments

Abbreviations

HDD is Hard Disk Drive
SATA is Serial AT Attachment
SAS is Serial Attached SCSI
SSD is Solid State Disk
RAID is Redundant Array of Inexpensive Disks
NAND is memory based on the "Not And" gate

Bulk storage is an essential part of a CEW, both for processing in real time and for archiving for later retrieval. Hard disks with the SATA interface are getting bigger in storage size and cheaper in hardware cost over time, but they are not getting faster in performance or smaller in physical size. To get faster and smaller, we have to select hard disks with SAS interfaces, with a major compromise on storage size and hardware price.

RAID has been around for decades, providing redundancy, expanding volume size well beyond the confines of one physical hard disk, and speeding up sequential reading and writing (though not single random writes). We can deploy SAS RAID to address the storage size issue, but the hardware price will go up further.
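The redundancy mechanism behind parity RAID levels such as RAID 5 can be sketched in a few lines of Python: the parity block is the byte-wise XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors. This is a simplified model; a real controller also rotates parity across disks and stripes at the block level.

```python
from functools import reduce

def parity(blocks):
    """Parity block for a stripe: byte-wise XOR of all data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Rebuild the one missing block: XOR the parity with the survivors."""
    return parity(surviving_blocks + [parity_block])

stripe = [b"disk0", b"disk1", b"disk2"]  # equal-sized blocks, one per disk
p = parity(stripe)
lost = stripe[1]                          # pretend disk 1 fails
print(rebuild([stripe[0], stripe[2]], p) == lost)  # prints True
```

The same XOR property explains the write penalty mentioned above: every random write must update the parity block as well as the data block.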

SSD has emerged recently as a rising star on the horizon. It has not supplanted the HDD because of its high price, the longevity limitations of NAND memory, and the immaturity of controller technology. However, it has recently found a place as a RAID cache, for two leading benefits not achievable by other means. The first is a higher speed of random reads. The second is a low cost point when used in conjunction with SATA HDDs.

Intel has released Sandy Bridge CPUs and chipsets, stable and bug-free since March 2011. System computation performance is over 20% higher than that of the former generation, called Westmere. The top CPU model has 4 editions that are officially capable of over-clocking to over 4 GHz, as long as CPU power consumption stays within the designed thermal limit, called TDP (Thermal Design Power). The 6-core edition with official over-clocking support will come out in the June 2011 timeframe.

Current State & Foreseeable Future

Semiconductor manufacturing technology has improved to 22 × 10⁻⁹ metres (22 nanometres) this year, 2011, and is heading towards 18 nanometres in 2012. Smaller means more: we will get more cores and more power from a new CPU or GPU made with advancing nanotechnology. The current laboratory probe limit is 10⁻¹⁸, and this sets the headroom for semiconductor technologists.

While the GPU and CUDA are having big impacts on performance computing, the dominant CPU manufacturers are not resting on their laurels. They have started to integrate their own GPUs into the CPU. However, the level of integration is a far cry from the CUDA world, and integrated GPUs will not displace CUDA for design and engineering computing in the foreseeable future. This means the current configuration described above will remain the prevailing formula for accelerating CAD, CAE and CEW.

End
