Objective
To understand, innovate, and manage problems through computer simulation.
The Large Scale Simulation Research Laboratory (LSR) builds the capacity to perform large scale computer simulations with the aim of solving high impact problems.
Understand
Knowledge discovery: Simulation is an important tool for discovering new knowledge, which underpins advanced technological capacity, economic competitiveness, and quality of life.
Innovate
Engineering design: Simulation is an important tool for engineering design, enabling new and improved products as well as better and more efficient production processes.
Manage problems
Simulation is an important tool for managing and solving complex problems such as environmental problems and natural disasters by forecasting the effects of the factors involved.
Historical background
Formerly, LSR was part of the Computer Technology Division (RDC), which was split into KEA and LSR during the restructuring of NECTEC in 2006. Two subdivisions of RDC, the Computational Science and Engineering subdivision (RDC3) and the Cluster Computing Technology subdivision (RDC4), were combined to form LSR.
Main activities
- Building high performance computing and large scale data storage resources
- Researching and applying virtualization technology
- Developing infrastructure and software for information integration
- Developing simulation tools (programs)
- Applying simulation to knowledge discovery, innovation, and problem management
Relevant technologies
- Cluster computing, Grid computing, Large scale storage
- Distributed processing: Web Services, XML, Java Programming
- Numerical techniques: FEM and CFD
Current projects
- Development of Finite Element Toolkit
- Developing programs and computing techniques for applying the Finite Element Method (FEM) and Computational Fluid Dynamics (CFD) to solve industrial and environmental problems
- Period 4 years (2006-2009)
- Main outputs are programs for FEM and CFD calculations, as well as knowledge and techniques for performing FEM and CFD simulations of physical phenomena.
- Information Grid
- Applying Grid computing technology to information integration, enabling its effective use in analysing and solving problems.
- Period 4.5 years (2005-2009)
- Main outputs are a framework, an architecture, and a prototype system for information integration.
- Computational Science and Engineering Capacity Building
- Promoting the development of CSE through three activities: research networking, computing resource sharing, and knowledge sharing
- Period 4 years (2006-2009)
- Main outputs are stronger collaboration and higher research productivity in the areas of computational science and engineering (CSE).
- Development of Computing Infrastructure (Phase 2)
- Developing and providing computing resource for CSE research
- Period 5 years (2008-2012)
- Main output is a high performance computing service available to researchers working in Thailand.
- EuAsiaGrid
- Aiming to encourage collaborative approaches across scientific disciplines and communities, building on existing European experience with grid infrastructures and scientific applications.
- Period 2 years
- Main output is the provision of a Grid-based e-Infrastructure through national and international collaborations.
Achievements
Finite Element Method Program
We have developed a program for performing FEM calculations with the following capabilities.
- Parallel processing: The program uses the MPI programming model to perform the calculation in parallel. On a single-core, single-processor computer it runs sequentially as one process. On a multicore machine it runs in parallel using multiple processes, enabling faster simulation. On a cluster of networked machines it runs in parallel, distributed mode, spreading processes and data across machines to enable faster and larger simulations.
- Multiple physics: The program can analyze many physical phenomena, including linear elasticity, steady state electricity, heat transfer, and piezoelectricity.
- Modern storage scheme: FEM problems typically lead to large, sparse systems of linear equations. We store the stiffness matrix in the Compressed Row/Column Storage (CRS/CCS) format, which keeps only its non-zero entries and so uses memory economically. The efficiency of this scheme was demonstrated on test cases with large numbers of elements, which ran smoothly in our software but failed in a commercial package due to insufficient memory.
- Faster numerical solver: In FEM, the most time-consuming step is solving the large system of linear algebraic equations; problems with more than 100,000 nodes often take many hours to solve. Our software solves the linear equations with modern iterative methods such as CG, BiCG, GMRES, and GPBiCG, combined with preconditioners such as SSOR, ILU, IC, and DIAG. The resulting solver is faster, as confirmed by timing comparisons against commercial software.
- High accuracy: Accuracy was confirmed by comparison with commercial FEM software across a variety of test cases; the two sets of results were in excellent agreement.
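As a rough illustration of the two points above — compressed sparse storage and a preconditioned iterative solve — the following SciPy sketch assembles a 1D Poisson-type stiffness matrix in CSR format and solves it with CG under a Jacobi (DIAG-style) preconditioner. This is a minimal sketch with an invented matrix and problem size, not the lab's actual code.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg, LinearOperator

n = 1000  # illustrative problem size
# Tridiagonal 1D Poisson-type stiffness matrix: 2 on the diagonal, -1 off it.
rows = np.concatenate([np.arange(n), np.arange(n - 1), np.arange(1, n)])
cols = np.concatenate([np.arange(n), np.arange(1, n), np.arange(n - 1)])
vals = np.concatenate([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)])
K = csr_matrix((vals, (rows, cols)), shape=(n, n))

# CRS stores only the non-zero entries: 3n - 2 values instead of n^2.
print(K.nnz, n * n)

b = np.ones(n)
# Diagonal (Jacobi) preconditioner, in the spirit of the DIAG option above.
M = LinearOperator((n, n), matvec=lambda x: x / K.diagonal())
u, info = cg(K, b, M=M)           # info == 0 means the solver converged
print(np.linalg.norm(K @ u - b) < 1e-3 * np.linalg.norm(b))
```

The same pattern scales to the CRS matrices described above: only the sparsity structure and the choice of iterative method and preconditioner change.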
Computational Fluid Dynamics Program
We have developed a program for performing CFD calculations with the following capabilities.
- Parallel processing: The program uses the MPI programming model to perform the calculation in parallel. On a single-core, single-processor computer it runs sequentially as one process. On a multicore machine it runs in parallel using multiple processes, enabling faster simulation. On a cluster of networked machines it runs in parallel, distributed mode, spreading processes and data across machines to enable faster, larger, and finer-resolution simulations.
- Direct numerical simulation for computational fluid dynamics: The program performs direct numerical simulation without using a turbulence model. This makes the program consume far more computing resources than common commercial CFD programs, but it provides more accurate solutions that do not depend on the validity of any turbulence model.
- Direct numerical simulation for computational heat transfer: The program performs direct numerical simulation of the convective heat transfer process without a turbulence model. This allows us to capture and measure correct heat transfer characteristics, especially for flows in the transition and turbulent regimes.
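Direct simulation advances the governing transport equations in time on a fine grid instead of modelling unresolved scales. As a highly simplified sketch of that kind of explicit time integration — the 1D heat equation with a forward-Euler step, using illustrative grid and step sizes, not the lab's solver:

```python
import numpy as np

nx = 101
dx = 1.0 / (nx - 1)
alpha = 1.0                    # thermal diffusivity (illustrative value)
dt = 0.4 * dx**2 / alpha       # below the explicit stability limit 0.5*dx^2/alpha
nsteps = 500

x = np.linspace(0.0, 1.0, nx)
T = np.sin(np.pi * x)          # initial temperature; both walls held at zero

for _ in range(nsteps):
    lap = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dx**2   # central second difference
    T[1:-1] = T[1:-1] + dt * alpha * lap             # forward-Euler update

# For this initial condition the exact mid-point value is exp(-pi^2*alpha*t).
t = nsteps * dt
err = abs(T[nx // 2] - np.exp(-np.pi**2 * alpha * t))
print(err)
```

A production DNS code applies the same advance-in-time idea to the full Navier–Stokes and energy equations in 3D, which is where the MPI domain decomposition described above becomes essential.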
Computing Resources
We have successfully developed and provided computing resource services to researchers in Thailand. We provide the following computing resources.
- Ocean Cluster: Our newest cluster, with four nodes totalling 64 cores of Intel Xeon X7350 2.93GHz processors and 4GB of memory per core (256GB in total). The nodes are interconnected via an InfiniBand DDR switch at 20Gbps full speed. The cluster has an HP PolyServe high performance virtualization SAN storage with a capacity of 4.8TB. The performance of this system is 750 GFLOPS (Rpeak) and 400 GFLOPS (Rmax).
- Itanium Cluster: A high performance cluster based on fully 64-bit Itanium 2 processors. The system consists of 32 nodes (64 CPUs, Itanium 2 1.4GHz) with 256GB of total memory. The interconnect is both InfiniBand and Gigabit Ethernet. Total storage is 3TB. The performance of this system is 358 GFLOPS (Rpeak) and 200 GFLOPS (Rmax).
- Origin 2000: A Symmetric Multi-Processing computer with 8 processors (MIPS R10000 at 250MHz and R12000 at 300MHz) and 2GB of RAM, running IRIX64 6.5.
- Cappuccino Cluster: A computing cluster for materials science simulation under the Computational Nanoscience Consortium (CNC). The system consists of 26 cores of Intel Xeon 5140 processors with 52GB of memory, connected through 10Gbps InfiniBand.
- Data Storage: Large scale storage for geoscience and related fields, with a total capacity of 10TB.
- CFD Databank: A databank serving both national and international CFD communities, archiving community data for model development, benchmarking, and code verification.
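The Rpeak figures quoted above follow from the standard formula Rpeak = cores × clock × floating-point operations per cycle. Assuming 4 double-precision flops per cycle for both the Xeon X7350 and the Itanium 2 reproduces the quoted numbers:

```python
# Rpeak = cores x clock (GHz) x flops per cycle (assumed 4 for both chips)
ocean_rpeak = 64 * 2.93 * 4     # Ocean Cluster: 64 Xeon X7350 cores at 2.93GHz
itanium_rpeak = 64 * 1.4 * 4    # Itanium Cluster: 64 Itanium 2 CPUs at 1.4GHz
print(round(ocean_rpeak), round(itanium_rpeak))   # → 750 358
```

Rmax, by contrast, is the sustained performance measured on the LINPACK benchmark, which is why it is lower than Rpeak for both clusters.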
Information Grid
We have developed a framework for integrating information with the following capabilities:
- Horizontal integration of information across heterogeneous, distributed repositories, mostly relational databases. Each repository joining the Information Grid must deploy an information service that provides information according to a particular standard metadata schema. The heterogeneity of data schemas across repositories is handled with GiSTool, a tool that lets users manually map fields in a local schema to fields in the standard schema and then automatically converts information described in the local schema into the standard schema. The Information Grid then simply integrates the standardized information from the different repositories.
- Discovery of information across heterogeneous, distributed repositories, mostly relational databases. An application that needs information from different repositories can be developed simply on top of the Information Grid API, which lets it connect to the Information Grid network and request the desired information by specifying query conditions on the standard metadata schema. The application can retrieve results either all at once or incrementally.
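The mapping step that GiSTool performs can be pictured as applying a user-defined local-to-standard field map to each record. The sketch below uses invented field names and is not GiSTool's actual interface:

```python
# Hypothetical field names; the real standard schema is not shown in this document.
field_map = {"doc_title": "title", "auth_name": "creator", "pub_year": "date"}

def to_standard(record, mapping):
    """Rewrite a record from its local schema into the standard metadata schema."""
    return {std: record[local] for local, std in mapping.items() if local in record}

local_record = {"doc_title": "Grid report", "auth_name": "Somchai", "pub_year": 2008}
print(to_standard(local_record, field_map))
# → {'title': 'Grid report', 'creator': 'Somchai', 'date': 2008}
```

Once every repository's records are expressed in the standard schema this way, integrating and querying them across repositories reduces to working with a single schema.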
Collaborators and Partners
Grid
PRAGMA | APGrid PMA | Thai National Grid Project | AIST | Academia Sinica | EU-Asia Grid Project | KMITL (FEM) | CSEA | NANOTEC | MTEC | CFD grid | KMUTT (CFD)
Information Grid
STKS (TIAC) | KU (NAIST) | HLT | ITS | NRCT
Environment
GEO Grid | DMR | HAII |
Others
SWU (Muscle Model) | TGIST