아시아뉴스통신


Implementing High-Performance Computing for Big Data Processing

  • [아시아뉴스통신] Reporter Ian Maclang
  • Filed: 2018-02-27 20:30
Photo by: Tumisu via Pixabay
 

Not all companies that work with big data have adopted high-performance computing (HPC), but almost all have embraced Hadoop-style analytics computing, according to Mary Shacklett of TechRepublic.


Both HPC and Hadoop employ parallel data processing, but data storage is centralized in the former, while the latter relies on commodity hardware to store data. HPC also requires costly networking equipment with high throughput and low latency. Hadoop may be cheaper and can run in the cloud, but that is not an option for many scientific institutions that need HPC to process big data.
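The Hadoop-style model mentioned above can be sketched in miniature: data is split into partitions (as it would be across commodity nodes), a map step processes each partition in parallel, and a reduce step merges the results. This is an illustrative sketch using Python's standard `multiprocessing` module, not actual Hadoop code; the word-count workload and partition contents are hypothetical.

```python
from collections import Counter
from multiprocessing import Pool

def map_count(chunk):
    """Map step: count words within one partition of the data."""
    return Counter(chunk.split())

def reduce_counts(partials):
    """Reduce step: merge the per-partition counts into one total."""
    total = Counter()
    for partial in partials:
        total += partial
    return total

if __name__ == "__main__":
    # Each string stands in for a data block stored on a separate commodity node.
    partitions = [
        "big data needs parallel processing",
        "hadoop processes big data on commodity hardware",
        "hpc processes big data on centralized storage",
    ]
    # Map each partition in a separate worker process, then reduce.
    with Pool(processes=3) as pool:
        partials = pool.map(map_count, partitions)
    totals = reduce_counts(partials)
    print(totals["data"])  # "data" appears once in each of the 3 partitions -> 3
```

The same map/reduce structure scales from this toy example to cluster-sized datasets; what Hadoop adds is distributed storage, scheduling, and fault tolerance across many machines.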


For organizations that plan to migrate to HPC for big data processing, there are several factors to consider.


There must be support for HPC from senior management and the board, who must understand HPC well enough to know what it can do for the company. They must know how HPC differs from ordinary analytics, why it needs expensive hardware and software, and why it is necessary for meeting business objectives.


Organizations planning to adopt HPC must be able to acquire pre-configured hardware that can be customized. Several vendors offer pre-configured HPC hardware that can be tailored to a client's needs.


The cost of adopting HPC must be justified by developing a return-on-investment case that would satisfy senior management and the board. For instance, a jet manufacturer realized that it no longer had to rent physical wind tunnels once it could run design simulations on HPC with 99.999-percent accuracy. The company recovered its sizable HPC investment in a short time.
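A return-on-investment case of the kind described above usually comes down to simple arithmetic: up-front cost versus recurring savings. The sketch below shows a payback-period and simple-ROI calculation; the dollar figures are hypothetical and chosen only to mirror the wind-tunnel example, not taken from the article.

```python
def payback_period_months(investment, monthly_savings):
    """Months until cumulative savings cover the up-front HPC investment."""
    if monthly_savings <= 0:
        raise ValueError("savings must be positive for a finite payback period")
    return investment / monthly_savings

def simple_roi(total_savings, investment):
    """Return on investment as a multiple of the initial outlay."""
    return (total_savings - investment) / investment

# Hypothetical figures: a $2.4M HPC cluster replacing $200K/month in
# wind-tunnel rental would pay for itself in about a year.
investment = 2_400_000
monthly_savings = 200_000
print(payback_period_months(investment, monthly_savings))  # 12.0 months
print(simple_roi(monthly_savings * 36, investment))        # 2.0 (3-year ROI)
```

A real business case would also fold in power, cooling, staffing, and depreciation, but the payback-period framing is what tends to persuade a board.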



The IT staff who will oversee HPC should be trained properly so that they are comfortable using the technology. Outside consultants can be hired to train the staff and transfer knowledge. The team should include a data scientist who can develop the algorithms to run on HPC, an experienced systems programmer knowledgeable in C++ and Fortran, and a network communications specialist, all of whom should be able to work in a parallel processing environment.
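The kind of parallel algorithm such a team would develop is often embarrassingly parallel: split a large sample budget across workers and combine the partial results. As a hedged illustration (a prototype sketch, not production HPC code, which would typically be written in C++ or Fortran with MPI), here is a Monte Carlo estimate of pi distributed across worker processes:

```python
import random
from multiprocessing import Pool

def count_hits(args):
    """Count random points that land inside the unit quarter-circle."""
    samples, seed = args
    rng = random.Random(seed)  # per-worker seed for reproducibility
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def estimate_pi(total_samples, workers=4):
    """Split the sample budget across worker processes and combine results."""
    per_worker = total_samples // workers
    jobs = [(per_worker, seed) for seed in range(workers)]
    with Pool(workers) as pool:
        hits = pool.map(count_hits, jobs)
    return 4.0 * sum(hits) / (per_worker * workers)

if __name__ == "__main__":
    print(estimate_pi(400_000))  # prints an estimate close to 3.14159
```

The structure is the same one a data scientist would hand off for porting to the cluster: independent work units, no shared state during computation, and a single combine step at the end.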