Low-Priced NVIDIA Quadro-based GPU Instances for Deep Learning Inference to be Launched


California startup Pegara, Inc. has launched low-priced NVIDIA Quadro-based GPU instances for Deep Learning Inference through its "GPU EATER" heterogeneous cloud computing service.


NVIDIA's data center business hit $2 billion in annual revenue for its fiscal year ending in 2018, five times its level for the fiscal year ending in 2016. The company also projects that the total addressable market for its data center business will reach $30 billion in 2023 (*1), and that the market will continue to expand significantly beyond that. Revenue from GPU-based cloud computing power is strongly expected to grow along with this trend.

Currently, major cloud companies such as Amazon Web Services (AWS) offer instances based on NVIDIA Tesla GPUs, and using such services for Deep Learning Inference requires that the instances run continuously, without stoppage. In the case of AWS, however, that means paying approximately $658.80 per month (Linux on p2.xlarge) (*2), a cost that can be prohibitive for small research facilities, universities, and research and development personnel at smaller enterprises.

-- Why You Should Try NVIDIA Quadro for Deep Learning Inference --

Our research, however, has revealed that in Deep Learning Inference for tasks such as image classification and object recognition, the bottleneck is far more often the CPUs and the web API implementation than the GPUs. We have therefore concluded that, in many cases, multiple machines combining moderate CPUs with lower-specification GPUs can be more cost-effective than a single machine with expensive CPUs and GPUs.
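The kind of bottleneck described above is usually found by timing each stage of the inference pipeline separately. The sketch below illustrates that technique with stand-in stage functions; the `preprocess`, `infer`, and `postprocess` implementations here are hypothetical placeholders, not GPU EATER's or any framework's actual pipeline:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Hypothetical pipeline stages. In a real image-classification service
# these would be JPEG decoding/resizing (CPU-bound), model execution
# (GPU-bound), and JSON serialization in the web API layer (CPU-bound).
def preprocess(raw):
    return [x / 255.0 for x in raw]

def infer(batch):
    # Placeholder "model": index of the largest activation.
    return max(range(len(batch)), key=batch.__getitem__)

def postprocess(class_id):
    return {"class_id": class_id}

raw_image = list(range(256))
batch, t_pre = timed(preprocess, raw_image)
label, t_gpu = timed(infer, batch)
response, t_post = timed(postprocess, label)

for name, t in [("preprocess", t_pre), ("infer", t_gpu), ("postprocess", t_post)]:
    print(f"{name:12s} {t * 1e6:8.1f} us")
```

If the CPU-bound stages dominate such a breakdown, a cheaper GPU paired with a moderate CPU loses little throughput, which is the observation behind this service.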

This new service is based on those findings and on repeated discussions with Deep Learning researchers and developers, which confirmed the demand for less-expensive environments for running prediction and inference tasks.

In addition, a low-cost environment is ideal for beginners in the field of deep learning. AWS instances must be stopped once learning tasks are complete, and there are many reports of users forgetting to do so and incurring high charges as a result. With a low-cost cloud service, the hourly charge is lower, so even a forgotten instance is unlikely to produce anything like an AWS-sized bill, and even beginners unfamiliar with cloud services can use it with confidence.

Fees start at about $72 per month (roughly $0.0992 per hour), with no startup fees. Users can select from four NVIDIA GPU models: Quadro P400, P600, P1000, and P4000. Instances feature SSDs and run Ubuntu or CentOS.
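As a rough sanity check on the pricing above, the following sketch compares the two monthly figures. Assumptions not stated in the release: a 720-hour (30-day) month, and an on-demand p2.xlarge rate of about $0.90/hour (the $658.80 figure quoted earlier corresponds to roughly 732 billed hours at that rate):

```python
# Rough monthly-cost comparison (assumption: a 720-hour month).
HOURS_PER_MONTH = 24 * 30  # 720

aws_hourly = 0.90          # assumed p2.xlarge on-demand rate (Linux), USD/hour
gpu_eater_hourly = 0.0992  # GPU EATER starting rate from the release, USD/hour

aws_monthly = aws_hourly * HOURS_PER_MONTH
gpu_eater_monthly = gpu_eater_hourly * HOURS_PER_MONTH

print(f"AWS p2.xlarge: ~${aws_monthly:.2f}/month")
print(f"GPU EATER:     ~${gpu_eater_monthly:.2f}/month")
print(f"Ratio:         ~{aws_monthly / gpu_eater_monthly:.1f}x")
```

Under these assumptions the starting GPU EATER rate works out to roughly $71/month, consistent with the "about $72 per month" figure quoted above.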

In the future, Pegara also plans to offer enterprise services as well as computing resource cloud services built around various types of devices in addition to GPUs, such as FPGAs and ASICs.

*1: See the NVIDIA 2018 Investor Day Presentation.
*2: Estimated cost of on-demand usage of an AWS NVIDIA Tesla K80 instance (p2.xlarge, Linux).


Contact Author

Shunsuke Ichihara
Pegara
+1 9494078122