One of the main limitations of the CFD approach is its heavy computational load. Besides algorithmic choices, until a few years ago the only way to reduce computing time was to distribute runs over a large number of CPUs. This led to supercomputers made up of thousands of nodes; at first they were concentrated in national research centres, but nowadays such clusters are a standard hardware choice even for small companies. Parallelization techniques have been developed using both shared-memory (mainly OpenMP-based) and distributed-memory (mainly MPI-based) approaches.
In recent years, new hardware configurations based on the GPU (Graphics Processing Unit) have been introduced. As the name suggests, this kind of processor originated in digital graphics, but the solutions now in common use are devoted exclusively to numerical computation. Dedicated libraries (e.g. CUDA®) have emerged that are able to fully exploit GPU capabilities. At present, a wide porting effort is under way to transfer standard simulation codes onto this new platform. The technology is at the cutting edge, and the available evidence suggests that fluid-dynamic simulations (and numerical studies in general) will be sped up considerably in the short term.
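As a minimal sketch of the GPU programming model (an illustrative example, not taken from any particular porting project), a CUDA kernel assigns one lightweight thread to each array element, which is how the thousands of cores on a GPU are kept busy. The example below performs a SAXPY update and requires an NVIDIA GPU and the `nvcc` compiler to run.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// SAXPY: y[i] = a * x[i] + y[i], one GPU thread per element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                       // guard: the grid may overshoot n
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory: accessible from both the host (CPU) and the device (GPU).
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();         // wait for the kernel to finish

    printf("y[0] = %.1f\n", y[0]);   // expected 5.0 on a CUDA-capable GPU
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The contrast with the CPU approaches above is that parallelism here is expressed per data element rather than per loop or per process, which is why porting a standard simulation code to GPUs is a substantial restructuring effort rather than a recompilation.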
DOFWARE is investing resources in both directions. It is expanding a proprietary HPC cluster over which the most RAM-demanding simulations are distributed (standard CFD), while also evaluating cloud offerings. At the same time, GPU cards are used for computations that can exploit their large number of cores (typically meshless CFD simulations). Several evaluation tests have been set up and are continuously updated in order to gain a deep understanding of the hardware and exploit it in the best way. A further promising approach currently under study combines CPU and GPU capacities in a hybrid fashion; in-house analyses are being conducted in this direction too.