Amazon EC2 All Instance Details in Single Page

In Amazon EC2 Instance Types in Single Page we saw the configurations of all the instances. In Amazon EC2 All Instance Details in Single Page we will look at the details of each instance family, where we can use them, and how they can be helpful.

General Purpose: Mac, T4g, T3, T3a, T2, M6g, M5, M5a, M5n, M5zn, M4, A1

Compute Optimized: C6g, C6gn, C5, C5a, C5n, C4

Memory Optimized: R6g, R5, R5a, R5b, R5n, R4, X2gd, X1e, X1, High Memory, z1d

Accelerated Computing: P4, P3, P2, Inf1, G4dn, G4ad, G3, F1

Storage Optimized: I3, I3en, D2, D3, D3en, H1

General Purpose

General purpose instances provide a balance of compute, memory, and networking resources, and can be used for a variety of workloads. These instances are ideal for applications that use these resources in equal proportions, such as web servers and code repositories.

Mac

Amazon EC2 Mac instances enable customers to run on-demand macOS workloads in the cloud for the first time, extending the flexibility, scalability, and cost benefits of AWS to all Apple developers. With EC2 Mac instances, developers creating apps for iPhone, iPad, Mac, Apple Watch, Apple TV, and Safari can provision and access macOS environments within minutes, dynamically scale capacity as needed, and benefit from AWS’s pay-as-you-go pricing.

Powered by the AWS Nitro System, EC2 Mac instances are built on Apple Mac mini computers featuring Intel Core i7 processors, and offer customers a choice of macOS Mojave (10.14), macOS Catalina (10.15), and macOS Big Sur (11.2.1).

Features:

  • Intel Core i7 processors at 3.2 GHz (4.6 GHz turbo)
  • 6 physical / 12 logical cores
  • 32 GiB of memory
  • Instance storage is available via Amazon Elastic Block Store (EBS)
  • Mac instances are dedicated, bare-metal instances that run on EC2 Dedicated Hosts, which are visible in the EC2 console

Use Cases: Developing, building, testing, and signing iOS, iPadOS, macOS, watchOS, and tvOS applications in the Xcode IDE
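
Because Mac instances run only on Dedicated Hosts, a host has to be allocated before an instance can be launched. Below is a minimal boto3 sketch of that flow; the Availability Zone and AMI ID are placeholders, not values from this article.

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")

  # Mac instances run on Dedicated Hosts, so allocate a mac1.metal host first.
  host = ec2.allocate_hosts(
      AvailabilityZone="us-east-1a",   # placeholder AZ
      InstanceType="mac1.metal",
      Quantity=1,
  )
  host_id = host["HostIds"][0]

  # Launch a macOS instance onto that host (the AMI ID below is a placeholder
  # for a macOS Mojave/Catalina/Big Sur AMI).
  ec2.run_instances(
      ImageId="ami-xxxxxxxxxxxxxxxxx",
      InstanceType="mac1.metal",
      MinCount=1,
      MaxCount=1,
      Placement={"Tenancy": "host", "HostId": host_id},
  )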

To check the instance types that come under this family, click here.

T4g

Amazon EC2 T4g instances are powered by Arm-based AWS Graviton2 processors. T4g instances are the next generation low cost burstable general purpose instance type that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. They deliver up to 40% better price performance over T3 instances and are ideal for running applications with moderate CPU usage that experience temporary spikes in usage.

T4g instances offer a balance of compute, memory, and network resources for a broad spectrum of general purpose workloads including large scale micro-services, small and medium databases, virtual desktops, and business-critical applications. Developers can also use these instances to run code repositories and build Arm-based applications natively in the cloud, eliminating the need for cross-compilation and emulation, and improving time to market.

Features:

  • Free trial for t4g.micro instances for up to 750 hours per month until March 31st, 2021.
  • Burstable CPU, governed by CPU Credits, and consistent baseline performance
  • Unlimited mode by default to ensure performance during peak periods and Standard mode option for a predictable monthly cost
  • Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
  • EBS-optimized by default
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor

Use Cases: Micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications.
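
Launching a T4g instance works like any other instance type, but it needs an arm64 (Graviton) AMI, and the CPU credit behaviour can be set at launch time. Below is a minimal boto3 sketch; the AMI ID is a placeholder.

  import boto3

  ec2 = boto3.client("ec2")

  # t4g needs an arm64 AMI (e.g. Amazon Linux 2 for Graviton); the ID is a placeholder.
  ec2.run_instances(
      ImageId="ami-xxxxxxxxxxxxxxxxx",
      InstanceType="t4g.micro",
      MinCount=1,
      MaxCount=1,
      # "unlimited" is the default for T4g; "standard" limits bursting to earned credits.
      CreditSpecification={"CpuCredits": "unlimited"},
  )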

To check the instance types that come under this family, click here.

T3

T3 instances are the low cost burstable general purpose instance type that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3 instances are designed for applications with moderate CPU usage that experience temporary spikes in use.

T3 instances offer a balance of compute, memory, and network resources and are a very cost effective way to run a broad spectrum of general purpose workloads including large scale micro-services, small and medium databases, virtual desktops, and business-critical applications. T3 instances are also an affordable option to run your code repositories and development and test environments.

Features:

  • Burstable CPU, governed by CPU Credits, and consistent baseline performance
  • Unlimited mode by default to ensure performance during peak periods and Standard mode option for a predictable monthly cost
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • AWS Nitro System and high frequency Intel Xeon Scalable processors result in up to a 30% price performance improvement over T2 instances

Use Cases: Micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications

To check the instance types that come under this family, click here.

T3a

T3a instances are the next generation burstable general-purpose instance type that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3a instances offer a balance of compute, memory, and network resources and are designed for applications with moderate CPU usage that experience temporary spikes in use. T3a instances deliver up to 10% cost savings over comparable instance types.

T3a instances accumulate CPU credits when a workload is operating below the baseline threshold. Each earned CPU credit gives the T3a instance the opportunity to burst with the performance of a full CPU core for one minute when needed. T3a instances can burst at any time for as long as required in Unlimited mode.
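
You can check whether a burstable instance is in Standard or Unlimited mode, and switch between the two, through the EC2 API. A minimal boto3 sketch, with the instance ID as a placeholder:

  import boto3

  ec2 = boto3.client("ec2")
  instance_id = "i-0123456789abcdef0"  # placeholder

  # See whether the instance is running in "standard" or "unlimited" credit mode.
  spec = ec2.describe_instance_credit_specifications(InstanceIds=[instance_id])
  print(spec["InstanceCreditSpecifications"][0]["CpuCredits"])

  # Switch to Standard mode for a predictable monthly cost.
  ec2.modify_instance_credit_specification(
      InstanceCreditSpecifications=[
          {"InstanceId": instance_id, "CpuCredits": "standard"}
      ]
  )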

Features:

  • AMD EPYC 7000 series processors with an all core turbo clock speed of 2.5 GHz
  • Burstable CPU, governed by CPU Credits, and consistent baseline performance
  • Unlimited mode by default to ensure performance during peak periods and Standard mode option for a predictable monthly cost
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor

Use Cases: Micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications

To check the instance types that come under this family, click here.

T2

T2 instances are a low-cost, general purpose instance type that provides a baseline level of CPU performance with the ability to burst above the baseline when needed. With On-Demand Instance prices starting at $0.0058 per hour, T2 instances are one of the lowest-cost Amazon EC2 instance options and are ideal for a variety of general-purpose applications like micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development, build and stage environments, code repositories, and product prototypes.
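
As a rough worked example of that quoted On-Demand rate (rates vary by size and Region, so treat the number as illustrative):

  hourly_rate = 0.0058        # USD per hour, example T2 On-Demand rate
  hours_per_month = 730       # average hours in a month
  print(f"~${hourly_rate * hours_per_month:.2f} per month")  # about $4.23 per month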

Features:

  • High frequency Intel Xeon processors
  • Burstable CPU, governed by CPU Credits, and consistent baseline performance
  • Low-cost general purpose instance type, and Free Tier eligible*
  • Balance of compute, memory, and network resources

Use Cases: Websites and web applications, development environments, build servers, code repositories, micro services, test and staging environments, and line of business applications. 

To check the instance types that come under this family, click here.

M6g

Amazon EC2 M6g instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price performance over current generation M5 instances and offer a balance of compute, memory, and networking resources for a broad set of workloads. They are the best choice for applications built on open-source software such as application servers, microservices, gaming servers, mid-size data stores, and caching fleets. Developers can also use these instances to build Arm-based applications natively in the cloud, eliminating the need for cross-compilation and emulation, and improving time to market.

M6g instances are also available with local NVMe-based SSD block-level storage option (M6gd) for applications that need high-speed, low latency local storage.
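
If you want to see which instance types in a Region are built on Graviton (arm64) processors — the M6g/M6gd family included — the instance-type metadata can be queried. A minimal boto3 sketch:

  import boto3

  ec2 = boto3.client("ec2")

  # List current-generation instance types built on 64-bit Arm (Graviton) processors.
  paginator = ec2.get_paginator("describe_instance_types")
  pages = paginator.paginate(
      Filters=[
          {"Name": "processor-info.supported-architecture", "Values": ["arm64"]},
          {"Name": "current-generation", "Values": ["true"]},
      ]
  )
  arm_types = sorted(t["InstanceType"] for page in pages for t in page["InstanceTypes"])
  print(arm_types)  # e.g. m6g.*, m6gd.*, c6g.*, r6g.*, t4g.*, ...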

Features:

  • Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
  • Support for Enhanced Networking with Up to 25 Gbps of Network bandwidth
  • EBS-optimized by default
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
  • With M6gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance

Use Cases: Applications built on open-source software such as application servers, microservices, gaming servers, mid-size data stores, and caching fleets.

To check the instance types that come under this family, click here.

M5

Amazon EC2 M5 Instances are the next generation of the Amazon EC2 General Purpose compute instances. M5 instances offer a balance of compute, memory, and networking resources for a broad range of workloads. This includes web and application servers, small and mid-sized databases, cluster computing, gaming servers, caching fleets, and app development environments. Additionally, M5d, M5dn, and M5ad instances have local storage, offering up to 3.6TB of NVMe-based SSDs.

Features:

  • Up to 3.1 GHz Intel Xeon® Platinum 8175M processors with new Intel Advanced Vector Extension (AVX-512) instruction set
  • New larger instance size, m5.24xlarge, offering 96 vCPUs and 384 GiB of memory
  • Up to 25 Gbps network bandwidth using Enhanced Networking
  • Requires HVM AMIs that include drivers for ENA and NVMe
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
  • With M5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M5 instance
  • New 8xlarge and 16xlarge sizes now available.

Use Cases: Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and for running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications

To check the instance types that come under this family, click here.

M5a

M5a instances are the latest generation of General Purpose Instances powered by AMD EPYC 7000 series processors. M5a instances deliver up to 10% cost savings over comparable instance types. With M5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance.

Features:

  • AMD EPYC 7000 series processors with an all core turbo clock speed of 2.5 GHz
  • Up to 20 Gbps network bandwidth using Enhanced Networking
  • Requires HVM AMIs that include drivers for ENA and NVMe
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
  • With M5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M5a instance

Use Cases: Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and for running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications

To check the instance types that come under this family, click here.

M5n

M5 instances are ideal for workloads that require a balance of compute, memory, and networking resources, including web and application servers, small and mid-sized databases, cluster computing, gaming servers, and caching fleets. The higher bandwidth M5n and M5dn instance variants are ideal for applications that can take advantage of improved network throughput and packet rate performance.

Feature:

  • 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all-core Turbo CPU frequency of 3.1 GHz and maximum single core turbo frequency of 3.5 GHz 
  • Support for the new Intel Vector Neural Network Instructions (AVX-512 VNNI) which will help speed up typical machine learning operations like convolution, and automatically improve inference performance over a wide range of deep learning workloads
  • 25 Gbps of peak bandwidth on smaller instance sizes 
  • 100 Gbps of network bandwidth on the largest instance size
  • Requires HVM AMIs that include drivers for ENA and NVMe
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor 
  • Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server 
  • With M5dn instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M5 instance

Use Cases: Web and application servers, small and mid-sized databases, cluster computing, gaming servers, caching fleets, and other enterprise applications

To check the instance types that come under this family, click here.

M5zn

Amazon M5zn instances are the latest addition to the M5 family and deliver the fastest Intel Xeon Scalable processors in the cloud, with an all-core turbo frequency up to 4.5 GHz. M5zn instances are an ideal fit for applications that benefit from extremely high single-thread performance and high throughput, low latency networking, such as gaming, High Performance Computing, and simulation modeling for the automotive, aerospace, energy, and telecommunication industries.

Features:

  • 2nd Generation Intel Xeon Scalable Processors (Cascade Lake) with an all-core turbo frequency up to 4.5 GHz
  • Up to 100 Gbps of network bandwidth on the largest instance size and bare metal variant
  • Dedicated bandwidth to Amazon Elastic Block Store (EBS)
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • 12x and metal sizes of M5zn instances leverage the latest generation of the Elastic Network Adapter and enable consistent low latency with Elastic Fabric Adapter

Use Cases: Web and application servers, small and mid-sized databases, cluster computing, gaming servers, caching fleets, and other enterprise applications

To check the instance types that come under this family, click here.

M4

M4 instances provide a balance of compute, memory, and network resources, and are a good choice for many applications.

Features:

  • 2.3 GHz Intel Xeon® E5-2686 v4 (Broadwell) processors or 2.4 GHz Intel Xeon® E5-2676 v3 (Haswell) processors
  • EBS-optimized by default at no additional cost
  • Support for Enhanced Networking
  • Balance of compute, memory, and network resources

Use Cases: Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and for running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications.

To check the instance types that come under this family, click here.

A1

Amazon EC2 A1 instances deliver significant cost savings for scale-out and Arm-based applications such as web servers, containerized microservices, caching fleets, and distributed data stores that are supported by the extensive Arm ecosystem. A1 instances are the first EC2 instances powered by AWS Graviton Processors that feature 64-bit Arm Neoverse cores and custom silicon designed by AWS. These instances will also appeal to developers, enthusiasts, and educators across the Arm developer community. Most architecture-agnostic applications that can run on Arm cores could also benefit from A1 instances.  

Features:

  • Custom built AWS Graviton Processor with 64-bit Arm Neoverse cores
  • Support for Enhanced Networking with Up to 10 Gbps of Network bandwidth
  • EBS-optimized by default
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor

Use Cases: Scale-out workloads such as web servers, containerized microservices, caching fleets, and distributed data stores, as well as development environments

To check the instance types that come under this family, click here.

Compute Optimized

Compute Optimized instances are ideal for compute bound applications that benefit from high performance processors. Instances belonging to this family are well suited for batch processing workloads, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad server engines, machine learning inference and other compute intensive applications.

C6g

Amazon EC2 C6g instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price performance over current generation C5 instances and are ideal for running advanced compute-intensive workloads. This includes workloads such as high performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modelling, distributed analytics, and CPU-based machine learning inference.

C6g instances are available with local NVMe-based SSD block-level storage option (C6gd) for applications that need high-speed, low latency local storage. C6g instances with 100 Gbps networking and Elastic Fabric Adapter (EFA) support, called the C6gn instances, are also available for applications that need higher networking throughput, such as high performance computing (HPC), network appliance, real-time video communication, and data analytics.

Features:

  • Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
  • Support for Enhanced Networking with Up to 25 Gbps of Network bandwidth
  • EBS-optimized by default
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • With C6gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance

Use Cases: High performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modelling, distributed analytics, and CPU-based machine learning inference.

To check the instance types that come under this family, click here.

C6gn

Amazon EC2 C6gn instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price performance over current generation C5n instances and provide up to 100 Gbps networking and support for Elastic Fabric Adapter (EFA) for applications that need higher networking throughput, such as high performance computing (HPC), network appliance, real-time video communication, and data analytics.

Features:

  • Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
  • Support for Enhanced Networking with Up to 100 Gbps of Network bandwidth
  • EFA support on c6gn.16xlarge instances
  • EBS-optimized by default, 2x EBS bandwidth compared to C5n instances
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor

Use Cases: High performance web servers, scientific modelling, batch processing, distributed analytics, high-performance computing (HPC), network appliance, machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.

To check the instance types that come under this family, click here.

C5

Amazon EC2 C5 instances deliver cost-effective high performance at a low price per compute ratio for running advanced compute-intensive workloads. This includes workloads such as high-performance web servers, high-performance computing (HPC), batch processing, ad serving, highly scalable multiplayer gaming, video encoding, scientific modelling, distributed analytics and machine/deep learning inference. The C5 instances are available with a choice of processors from Intel and AMD.

Features:

  • C5 instances offer a choice of processors based on the size of the instance.
  • New C5 and C5d 12xlarge, 24xlarge, and metal instance sizes feature custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all core Turbo frequency of 3.6GHz and single core turbo frequency of up to 3.9GHz.
  • Other C5 instance sizes will launch on the 2nd generation Intel Xeon Scalable Processors (Cascade Lake) or 1st generation Intel Xeon Platinum 8000 series (Skylake-SP) processor with a sustained all core Turbo frequency of up to 3.4GHz, and single core turbo frequency of up to 3.5 GHz.
  • New larger 24xlarge instance size offering 96 vCPUs, 192 GiB of memory, and optional 3.6TB local NVMe-based SSDs
  • Requires HVM AMIs that include drivers for ENA and NVMe
  • With C5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the C5 instance
  • Elastic Network Adapter (ENA) provides C5 instances with up to 25 Gbps of network bandwidth and up to 19 Gbps of dedicated bandwidth to Amazon EBS.
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor

Use Cases: High performance web servers, scientific modelling, batch processing, distributed analytics, high-performance computing (HPC), machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.

To check the instance types that come under this family, click here.

C5a

C5a instances offer leading x86 price-performance for a broad set of compute-intensive workloads.

Features:

  • 2nd generation AMD EPYC 7002 series processors running at frequencies up to 3.3 GHz
  • Elastic Network Adapter (ENA) provides C5a instances with up to 20 Gbps of network bandwidth and up to 9.5 Gbps of dedicated bandwidth to Amazon EBS
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • With C5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the C5a instance

Use Cases: C5a instances are ideal for workloads requiring high vCPU and memory bandwidth such as batch processing, distributed analytics, data transformations, gaming, log analysis, web applications, and other compute-intensive workloads.

To check the instance types that come under this family, click here.

C5n

C5n instances are ideal for high compute applications (including High Performance Computing (HPC) workloads, data lakes, and network appliances such as firewalls and routers) that can take advantage of improved network throughput and packet rate performance. C5n instances offer up to 100 Gbps network bandwidth and increased memory over comparable C5 instances. C5n.18xlarge instances support Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communication, like High Performance Computing (HPC) applications using the Message Passing Interface (MPI), at scale on AWS.
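
EFA is requested at launch as an 'efa' network interface, and MPI-style workloads are normally placed in a cluster placement group. A hedged boto3 sketch follows; the AMI, subnet, security group, and placement group names are placeholders, and the security group must allow traffic from itself for EFA to work.

  import boto3

  ec2 = boto3.client("ec2")

  ec2.run_instances(
      ImageId="ami-xxxxxxxxxxxxxxxxx",           # placeholder AMI
      InstanceType="c5n.18xlarge",               # EFA is supported on the 18xlarge size
      MinCount=2,
      MaxCount=2,
      Placement={"GroupName": "my-cluster-pg"},  # an existing cluster placement group
      NetworkInterfaces=[
          {
              "DeviceIndex": 0,
              "SubnetId": "subnet-xxxxxxxx",
              "Groups": ["sg-xxxxxxxx"],
              "InterfaceType": "efa",            # request an Elastic Fabric Adapter
          }
      ],
  )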

Features:

  • 3.0 GHz Intel Xeon Platinum processors with Intel Advanced Vector Extension 512 (AVX-512) instruction set
  • Sustained all core Turbo frequency of up to 3.4GHz, and single core turbo frequency of up to 3.5 GHz
  • Larger instance size, c5n.18xlarge, offering 72 vCPUs and 192 GiB of memory
  • Requires HVM AMIs that include drivers for ENA and NVMe
  • Network bandwidth increases to up to 100 Gbps, delivering increased performance for network intensive applications.
  • EFA support on c5n.18xlarge instances
  • 33% higher memory footprint compared to C5 instances
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor

Use Cases: High performance web servers, scientific modelling, batch processing, distributed analytics, high-performance computing (HPC), machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.

To check the instance types that come under this family, click here.

C4

C4 instances are optimized for compute-intensive workloads and deliver very cost-effective high performance at a low price per compute ratio.

Features:

  • High frequency Intel Xeon E5-2666 v3 (Haswell) processors optimized specifically for EC2
  • Default EBS-optimized for increased storage performance at no additional cost
  • Higher networking performance with Enhanced Networking supporting Intel 82599 VF
  • Requires Amazon VPC, Amazon EBS and 64-bit HVM AMIs

Use Cases: High performance front-end fleets, web-servers, batch processing, distributed analytics, high performance science and engineering applications, ad serving, MMO gaming, and video-encoding.

To check the instance types that come under this family, click here.

Memory Optimized

Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.

R6g

Amazon EC2 R6g instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price performance over current generation R5 instances and are ideal for running memory-intensive workloads such as open-source databases, in-memory caches, and real time big data analytics. Developers can also use these instances to build Arm-based applications natively in the cloud, eliminating the need for cross-compilation and emulation, and improving time to market.

R6g instances are also available with local NVMe-based SSD block-level storage option (R6gd) for applications that need high-speed, low latency local storage.

Features:

  • Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
  • Support for Enhanced Networking with Up to 25 Gbps of Network bandwidth
  • EBS-optimized by default
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • With R6gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance

Use Cases: Memory-intensive applications such as open-source databases, in-memory caches, and real time big data analytics

To check the instance types that come under this family, click here.

R5

Amazon EC2 R5 instances are the next generation of memory optimized instances for the Amazon Elastic Compute Cloud. R5 instances are well suited for memory intensive applications such as high-performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications. Additionally, you can choose from a selection of instances that have options for local NVMe storage, EBS optimized storage (up to 60 Gbps), and networking (up to 100 Gbps).

Features:

  • Up to 3.1 GHz Intel Xeon® Platinum 8000 series processors (Skylake-SP or Cascade Lake) with new Intel Advanced Vector Extension (AVX-512) instruction set
  • Up to 768 GiB of memory per instance
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • With R5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R5 instance
  • New 8xlarge and 16xlarge sizes now available.

Use Cases: R5 instances are well suited for memory intensive applications such as high performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications.

To check the instance types that come under this family, click here.

R5a

R5a instances are the latest generation of Memory Optimized instances ideal for memory-bound workloads and are powered by AMD EPYC 7000 series processors. R5a instances deliver up to 10% lower cost per GiB memory over comparable instances.

Features:

  • AMD EPYC 7000 series processors with an all core turbo clock speed of 2.5 GHz
  • Up to 20 Gbps network bandwidth using Enhanced Networking
  • Up to 768 GiB of memory per instance
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
  • With R5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R5a instance

Use Cases: R5a instances are well suited for memory intensive applications such as high performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications.

To check the instance types that come under this family, click here.

R5b

Amazon EC2 R5b instances are EBS-optimized variants of memory-optimized R5 instances. R5b instances increase EBS performance by 3x compared to same-sized R5 instances. R5b instances deliver up to 60 Gbps bandwidth and 260K IOPS of EBS performance, the fastest block storage performance on EC2.
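
To use that extra EBS headroom you would typically attach provisioned-IOPS volumes. Below is a minimal boto3 sketch creating an io2 volume and attaching it; the IDs, size, and IOPS are placeholders, and a single volume alone will not saturate the instance-level 260K IOPS limit.

  import boto3

  ec2 = boto3.client("ec2")

  # Create a provisioned-IOPS (io2) volume; values are placeholders.
  vol = ec2.create_volume(
      AvailabilityZone="us-east-1a",
      Size=500,            # GiB
      VolumeType="io2",
      Iops=32000,
  )

  # Wait for the volume, then attach it to a running R5b instance (placeholder ID).
  ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
  ec2.attach_volume(
      VolumeId=vol["VolumeId"],
      InstanceId="i-0123456789abcdef0",
      Device="/dev/sdf",
  )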

Features:

  • Custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all-core Turbo CPU frequency of 3.1 GHz and maximum single core turbo frequency of 3.5 GHz
  • Up to 96 vCPUs, Up to 768 GiB of Memory
  • Up to 25 Gbps network bandwidth
  • Up to 60 Gbps of EBS bandwidth

Use Cases: High performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics.

To check the instance types that come under this family, click here.

R5n

R5 instances are ideal for memory-bound workloads including high performance databases, distributed web scale in-memory caches, mid-sized in-memory databases, real time big data analytics, and other enterprise applications. The higher bandwidth R5n and R5dn instance variants are ideal for applications that can take advantage of improved network throughput and packet rate performance.

Features:

  • 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all-core Turbo CPU frequency of 3.1 GHz and maximum single core turbo frequency of 3.5 GHz
  • Support for the new Intel Vector Neural Network Instructions (AVX-512 VNNI) which will help speed up typical machine learning operations like convolution, and automatically improve inference performance over a wide range of deep learning workloads
  • 25 Gbps of peak bandwidth on smaller instance sizes
  • 100 Gbps of network bandwidth on the largest instance size
  • Requires HVM AMIs that include drivers for ENA and NVMe
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
  • With R5dn instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R5 instance

Use Cases: High performance databases, distributed web scale in-memory caches, mid-sized in-memory databases, real time big data analytics, and other enterprise applications

To check the instance types that come under this family, click here.

R4

R4 instances are optimized for memory-intensive applications and offer better price per GiB of RAM than R3.

Features:

  • High Frequency Intel Xeon E5-2686 v4 (Broadwell) processors
  • DDR4 Memory
  • Support for Enhanced Networking

Use Cases: High performance databases, data mining & analysis, in-memory databases, distributed web scale in-memory caches, applications performing real-time processing of unstructured big data, Hadoop/Spark clusters, and other enterprise applications.

To check the instance types that come under this family, click here.

X2gd

Amazon EC2 X2gd instances are the next generation of memory-optimized instances powered by AWS-designed, Arm-based AWS Graviton2 processors. They deliver up to 55% better price performance compared to current generation x86-based X1 instances. They also offer twice the memory per vCPU compared to R6g/R5 instances and the lowest cost per GiB of memory in Amazon EC2. The higher performance and additional memory of X2gd instances enable customers to run memory-intensive workloads such as in-memory databases (Redis, Memcached), relational databases (MySQL, PostgreSQL), electronic design automation (EDA) workloads, real-time analytics, and real-time caching servers. Additionally, as more customers run containers on AWS for application portability and infrastructure efficiency, X2gd instances make it possible for them to bundle more memory-intensive containerized applications on a single instance to optimize their compute infrastructure.

X2gd instances include local NVMe-based SSD block-level storage for applications that need high-speed, low latency access to data sets for caching and real-time analytics.

Features:

  • Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • Support for Enhanced Networking with up to 25 Gbps of network bandwidth
  • Local NVMe-based SSD storage provides high speed, low latency access to in-memory data
  • EBS-optimized by default

Use Cases: Memory-intensive workloads such as open-source databases (MySQL, MariaDB, and PostgreSQL), in-memory caches (Redis, KeyDB, Memcached), electronic design automation (EDA) workloads, real-time analytics, and real-time caching servers.

To check the instance types that come under this family, click here.

X1e

Amazon EC2 X1e instances are a part of the Amazon EC2 Memory Optimized instance family, designed for running high-performance databases, in-memory workloads such as SAP HANA, and other memory intensive enterprise applications in the AWS Cloud. X1e instances are powered by four Intel® Xeon® E7 8880 v3 processors offering up to 128 vCPUs and 3,904 GiB of DRAM-based memory.

The x1e.32xlarge instance is certified by SAP for workloads such as the next-generation Business Suite S/4HANA, Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA on the AWS Cloud.

Features:

  • High frequency Intel Xeon E7-8880 v3 (Haswell) processors
  • One of the lowest prices per GiB of RAM
  • Up to 3,904 GiB of DRAM-based instance memory
  • SSD instance storage for temporary block-level storage and EBS-optimized by default at no additional cost
  • Ability to control processor C-state and P-state configurations on x1e.32xlarge, x1e.16xlarge and x1e.8xlarge instances

Use Cases: High performance databases, in-memory databases (e.g. SAP HANA) and memory intensive applications. x1e.32xlarge instance certified by SAP to run next-generation Business Suite S/4HANA, Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA on the AWS cloud.

To check the instance types that come under this family, click here.

X1

X1 Instances are a part of the Amazon EC2 memory-optimized instance family and are designed for running large-scale and in-memory applications in the AWS Cloud. X1 instances offer up to 1,952 GiB of DRAM based memory. Each X1 instance is powered by four Intel® Xeon® E7 8880 v3 (codenamed Haswell) processors and offers up to 128 vCPUs.

Compared to other EC2 instances, X1 instances have one of the lowest price per GiB of RAM, and are ideal for running in-memory databases like SAP HANA, big data processing engines like Apache Spark or Presto, and high-performance computing (HPC) applications. X1 instances are certified by SAP to run production environments of the next-generation Business Suite S/4HANA, Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA on the AWS Cloud.

Features:

  • High frequency Intel Xeon E7-8880 v3 (Haswell) processors
  • One of the lowest prices per GiB of RAM
  • Up to 1,952 GiB of DRAM-based instance memory
  • SSD instance storage for temporary block-level storage and EBS-optimized by default at no additional cost
  • Ability to control processor C-state and P-state configuration

Use Cases: In-memory databases (e.g. SAP HANA), big data processing engines (e.g. Apache Spark or Presto), high performance computing (HPC). Certified by SAP to run Business Warehouse on HANA (BW), Data Mart Solutions on HANA, Business Suite on HANA (SoH), Business Suite S/4HANA.

To check the instance types that come under this family, click here.

High Memory

EC2 High Memory instances offer 6, 9, 12, 18, and 24 TB of memory in an instance. These instances are purpose-built to run large in-memory databases, including production deployments of the SAP HANA in-memory database, in the cloud. EC2 High Memory instances allow you to run large in-memory databases and business applications that rely on these databases in the same, shared Amazon Virtual Private Cloud (VPC), reducing the management overhead associated with complex networking and ensuring predictable performance.

EC2 High Memory instances are EBS-Optimized by default, and offer up to 38 Gbps of dedicated storage bandwidth to encrypted and unencrypted EBS volumes. These instances deliver high networking throughput and low-latency with up to 100 Gbps of aggregate network bandwidth using Amazon Elastic Network Adapter (ENA)-based Enhanced Networking. EC2 High Memory instances with 6 TB, 9 TB, and 12 TB are powered by an 8-socket platform with Intel® Xeon® Platinum 8176M (Skylake) processors. EC2 High Memory instances with 18 TB and 24 TB are the first Amazon EC2 instances powered by an 8-socket platform with 2nd Generation Intel® Xeon® Scalable (Cascade Lake) processors.

Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, the next-generation Business Suite S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments.

Features:

  • Now available as both bare metal and virtualized instances
  • From 6 to 24 TiB of instance memory
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • Virtualized instances are available with On-Demand and with 1-year and 3-year Savings Plan purchase options

Use Cases: Ideal for running large enterprise databases, including production installations of SAP HANA in-memory database in the cloud. Certified by SAP for running Business Suite on HANA, the next-generation Business Suite S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments.

To check the instance types that come under this family, click here.

z1d

Amazon EC2 z1d instances deliver high single thread performance due to a custom Intel® Xeon® Scalable processor with a sustained all core frequency of up to 4.0 GHz. z1d provides both high compute performance and high memory, which is ideal for electronic design automation (EDA), computational lithography, financial, actuarial, data analytics, and CPU or memory-bound relational database workloads with high per-core licensing costs. z1d is also ideal for applications requiring high single-threaded performance and high memory usage.

Many workloads such as EDA and relational databases have high per core software licensing fees. Semiconductor firms who use EDA software for the design and verification of integrated circuits have software licensing costs that account for a majority of their TCO. With z1d instances, semiconductor firms are able to run more EDA jobs per core, amortizing their annual license subscription over more jobs and reducing their design and verification time. Similarly, customers typically pay per processor licensing fees for relational database workloads and can lower licensing costs with a faster processor accompanied by a large amount of memory.

Features:

  • A custom Intel® Xeon® Scalable processor with a sustained all core frequency of up to 4.0 GHz with new Intel Advanced Vector Extension (AVX-512) instruction set
  • Up to 1.8TB of instance storage
  • High memory with up to 384 GiB of RAM
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • With z1d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the z1d instance

Use Cases: Ideal for electronic design automation (EDA) and certain relational database workloads with high per-core licensing costs.

To check the instance types that come under this family, click here.

Accelerated Computing

Accelerated computing instances use hardware accelerators, or co-processors, to perform functions, such as floating point number calculations, graphics processing, or data pattern matching, more efficiently than is possible in software running on CPUs.

P4

Amazon EC2 P4d instances deliver the highest performance for machine learning (ML) training and high performance computing (HPC) applications in the cloud. P4d instances are powered by the latest NVIDIA A100 Tensor Core GPUs and deliver industry-leading high throughput and low latency networking. These instances are the first in the cloud to support 400 Gbps instance networking. P4d instances provide up to 60% lower cost to train ML models, including an average of 2.5x better performance for deep learning models compared to previous generation P3 and P3dn instances.

Amazon EC2 P4d instances are deployed in hyperscale clusters called EC2 UltraClusters that are comprised of the highest performance compute, networking, and storage in the cloud. Each EC2 UltraCluster is one of the most powerful supercomputers in the world, enabling customers to run their most complex multi-node ML training and distributed HPC workloads. Customers can easily scale from a few to thousands of NVIDIA A100 GPUs in the EC2 UltraClusters based on their ML or HPC project needs.

Researchers, data scientists, and developers can leverage P4d instances to train ML models for use cases such as natural language processing, object detection and classification, and recommendation engines, as well as run HPC applications such as pharmaceutical discovery, seismic analysis, and financial modeling. Unlike on-premises systems, customers can access virtually unlimited compute and storage capacity, scale their infrastructure based on business needs, and spin up a multi-node ML training job or a tightly coupled distributed HPC application in minutes, without any setup or maintenance costs.
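
For multi-node training, P4d instances are typically launched into a cluster placement group so node-to-node traffic stays low latency and high bandwidth; EFA network interfaces would then be attached the same way as in the C5n example above. A minimal boto3 sketch, with the AMI ID and placement group name as placeholders:

  import boto3

  ec2 = boto3.client("ec2")

  # Pack the instances close together for low-latency, high-bandwidth traffic.
  ec2.create_placement_group(GroupName="p4d-training-pg", Strategy="cluster")

  ec2.run_instances(
      ImageId="ami-xxxxxxxxxxxxxxxxx",   # e.g. a Deep Learning AMI (placeholder)
      InstanceType="p4d.24xlarge",
      MinCount=2,
      MaxCount=2,
      Placement={"GroupName": "p4d-training-pg"},
  )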

Features:

  • Up to 8 NVIDIA A100 Tensor Core GPUs
  • 400 Gbps instance networking with support for Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect RDMA (remote direct memory access)
  • 600 GB/s peer to peer GPU communication with NVIDIA NVSwitch
  • Deployed in EC2 UltraClusters consisting of more than 4,000 NVIDIA A100 Tensor Core GPUs, Petabit-scale networking, and scalable low latency storage with Amazon FSx for Lustre
  • 3.0 GHz 2nd Generation Intel Xeon Scalable (Cascade Lake) processors

Use Cases: Machine learning, high performance computing, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, and drug discovery.

To check the instance types that come under this family, click here.

P3

Amazon EC2 P3 instances deliver high performance compute in the cloud with up to 8 NVIDIA® V100 Tensor Core GPUs and up to 100 Gbps of networking throughput for machine learning and HPC applications. These instances deliver up to one petaflop of mixed-precision performance per instance to significantly accelerate machine learning and high performance computing applications. Amazon EC2 P3 instances have been proven to reduce machine learning training times from days to minutes, as well as increase the number of simulations completed for high performance computing by 3-4x.

With up to 4x the network bandwidth of P3.16xlarge instances, Amazon EC2 P3dn.24xlarge instances are the latest addition to the P3 family, optimized for distributed machine learning and HPC applications. These instances provide up to 100 Gbps of networking throughput, 96 custom Intel® Xeon® Scalable (Skylake) vCPUs, 8 NVIDIA® V100 Tensor Core GPUs with 32 GB of memory each, and 1.8 TB of local NVMe-based SSD storage. P3dn.24xlarge instances also support Elastic Fabric Adapter (EFA) which accelerates distributed machine learning applications that use NVIDIA Collective Communications Library (NCCL). EFA can scale to thousands of GPUs, significantly improving the throughput and scalability of deep learning training models, which leads to faster results.

Features:

  • Up to 8 NVIDIA Tesla V100 GPUs, each pairing 5,120 CUDA Cores and 640 Tensor Cores
  • High frequency Intel Xeon E5-2686 v4 (Broadwell) processors for p3.2xlarge, p3.8xlarge, and p3.16xlarge.
  • High frequency 2.5 GHz (base) Intel Xeon 8175M processors for p3dn.24xlarge.
  • Supports NVLink for peer-to-peer GPU communication
  • Provides up to 100 Gbps of aggregate network bandwidth.
  • EFA support on p3dn.24xlarge instances

Use Cases: Machine/Deep learning, high performance computing, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, drug discovery.

To check the instance types that come under this family, click here.

P2

P2 instances are intended for general-purpose GPU compute applications.

Features:

  • High frequency Intel Xeon E5-2686 v4 (Broadwell) processors
  • High-performance NVIDIA K80 GPUs, each with 2,496 parallel processing cores and 12GiB of GPU memory
  • Supports GPUDirect™ for peer-to-peer GPU communications
  • Provides Enhanced Networking using Elastic Network Adapter (ENA) with up to 25 Gbps of aggregate network bandwidth within a Placement Group
  • EBS-optimized by default at no additional cost

Use Cases: Machine learning, high performance databases, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other server-side GPU compute workloads.

To check the instance types that come under this family, click here.

Inf1

Businesses across a diverse set of industries are looking at AI-powered transformation to drive business innovation, improve customer experience and process improvements. Machine learning models that power AI applications are becoming increasingly complex resulting in rising underlying compute infrastructure costs. Up to 90% of the infrastructure spend for developing and running ML applications is often on inference. Customers are looking for cost-effective infrastructure solutions for deploying their ML applications in production.

Amazon EC2 Inf1 instances deliver high-performance ML inference at the lowest cost in the cloud. They deliver up to 2.3x higher throughput and up to 70% lower cost per inference than comparable current generation GPU-based Amazon EC2 instances. Inf1 instances are built from the ground up to support machine learning inference applications. They feature up to 16 AWS Inferentia chips, high-performance machine learning inference chips designed and built by AWS. Additionally, Inf1 instances include 2nd generation Intel® Xeon® Scalable processors and up to 100 Gbps networking to deliver high throughput inference.

Customers can use Inf1 instances to run large scale machine learning inference applications such as search, recommendation engines, computer vision, speech recognition, natural language processing, personalization, and fraud detection, at the lowest cost in the cloud.

Developers can deploy their machine learning models to Inf1 instances by using the AWS Neuron SDK, which is integrated with popular machine learning frameworks such as TensorFlow, PyTorch and MXNet. They can continue using the same ML workflows and seamlessly migrate applications on to Inf1 instances with minimal code changes and with no tie-in to vendor specific solutions.

Get started easily with Inf1 instances using Amazon SageMaker, AWS Deep Learning AMIs that come pre-configured with Neuron SDK, or using Amazon ECS or Amazon EKS for containerized ML applications.
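
As a rough illustration of the Neuron workflow, the sketch below compiles a PyTorch model for Inferentia. It assumes the torch-neuron and torchvision packages from the Neuron SDK / Deep Learning AMI are installed; exact APIs vary by Neuron SDK version, so treat this as a sketch rather than a definitive recipe.

  import torch
  import torch_neuron            # AWS Neuron SDK integration for PyTorch (assumed installed)
  from torchvision import models

  # Load a pretrained model and trace/compile it for the Inferentia chip.
  model = models.resnet50(pretrained=True).eval()
  example = torch.zeros([1, 3, 224, 224])
  model_neuron = torch.neuron.trace(model, example_inputs=[example])

  # The compiled artifact can be saved and later loaded for serving on an Inf1 instance.
  model_neuron.save("resnet50_neuron.pt")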

Features:

  • Up to 16 AWS Inferentia Chips
  • AWS Neuron SDK
  • High frequency 2nd Gen Intel® Xeon® Scalable processors
  • Up to 100 Gbps networking

Use Cases: Recommendation engines, forecasting, image and video analysis, advanced text analytics, document analysis, voice, conversational agents, translation, transcription, and fraud detection.

To check the instance types that come under this family, click here.

G4dn

G4dn instances, powered by NVIDIA T4 GPUs, are the lowest cost GPU-based instances in the cloud for machine learning inference and small scale training. They also provide high performance and are a cost-effective solution for graphics applications that are optimized for NVIDIA GPUs using NVIDIA libraries such as CUDA, CuDNN, and NVENC. They provide up to 8 NVIDIA T4 GPUs, 96 vCPUs, 100 Gbps networking, and 1.8 TB local NVMe-based SSD storage and are also available as bare metal instances.

Features:

  • 2nd Generation Intel Xeon Scalable (Cascade Lake) processors
  • NVIDIA T4 Tensor Core GPUs
  • Up to 100 Gbps of networking throughput
  • Up to 1.8 TB of local NVMe storage

Use Cases: Machine learning inference for applications like adding metadata to an image, object detection, recommender systems, automated speech recognition, and language translation. G4 instances also provide a very cost-effective platform for building and running graphics-intensive applications, such as remote graphics workstations, video transcoding, photo-realistic design, and game streaming in the cloud.  

To check the instance types that come under this family, click here.

G4ad

G4ad instances, powered by AMD Radeon Pro V520 GPUs, provide the best price performance for graphics intensive applications in the cloud. These instances offer up to 45% better price performance compared to G4dn instances, which were already the lowest cost instances in the cloud, for graphics applications such as remote graphics workstations, game streaming, and rendering that leverage industry-standard APIs such as OpenGL, DirectX, and Vulkan. They provide up to 4 AMD Radeon Pro V520 GPUs, 64 vCPUs, 25 Gbps networking, and 2.4 TB local NVMe-based SSD storage.

Features:

  • Second generation AMD EPYC processors
  • AMD Radeon Pro V520 GPUs
  • Up to 2.4 TB of local NVMe storage

Use Cases: Graphics-intensive applications, such as remote graphics workstations, video transcoding, photo-realistic design, and game streaming in the cloud.

To check the instance types that come under this family, click here.

G3

Amazon EC2 G3 instances are the latest generation of Amazon EC2 GPU graphics instances that deliver a powerful combination of CPU, host memory, and GPU capacity. G3 instances are ideal for graphics-intensive applications such as 3D visualizations, mid to high-end virtual workstations, virtual application software, 3D rendering, application streaming, video encoding, gaming, and other server-side graphics workloads.

G3 instances provide access to NVIDIA Tesla M60 GPUs, each with up to 2,048 parallel processing cores, 8 GiB of GPU memory, and a hardware encoder supporting up to 10 H.265 (HEVC) 1080p30 streams and up to 18 H.264 1080p30 streams. With the latest driver releases, these GPUs provide support for OpenGL, DirectX, CUDA, OpenCL, and Capture SDK (formerly known as GRID SDK).

Features:

  • High frequency Intel Xeon E5-2686 v4 (Broadwell) processors
  • NVIDIA Tesla M60 GPUs, each with 2048 parallel processing cores and 8 GiB of video memory
  • Enables NVIDIA GRID Virtual Workstation features, including support for 4 monitors with resolutions up to 4096×2160. Each GPU included in your instance is licensed for one “Concurrent Connected User”
  • Enables NVIDIA GRID Virtual Application capabilities for application virtualization software like Citrix XenApp Essentials and VMware Horizon, supporting up to 25 concurrent users per GPU
  • Each GPU features an on-board hardware video encoder designed to support up to 10 H.265 (HEVC) 1080p30 streams and up to 18 H.264 1080p30 streams, enabling low-latency frame capture and encoding, and high-quality interactive streaming experiences
  • Enhanced Networking using the Elastic Network Adapter (ENA) with 25 Gbps of aggregate network bandwidth within a Placement Group

Use Cases: 3D visualizations, graphics-intensive remote workstation, 3D rendering, application streaming, video encoding, and other server-side graphics workloads.

To check the instance types that come under this family, click here.

F1

Amazon EC2 F1 instances use FPGAs to enable delivery of custom hardware accelerations. F1 instances are easy to program and come with everything you need to develop, simulate, debug, and compile your hardware acceleration code, including an FPGA Developer AMI and supporting hardware level development on the cloud. Using F1 instances to deploy hardware accelerations can be useful in many applications to solve complex science, engineering, and business problems that require high bandwidth, enhanced networking, and very high compute capabilities. Examples of target applications that can benefit from F1 instance acceleration are genomics, search/analytics, image and video processing, network security, electronic design automation (EDA), image and file compression and big data analytics.

F1 instances provide diverse development environments: from low-level hardware developers to software developers who are more comfortable with C/C++ and OpenCL environments (available on our GitHub). Once your FPGA design is complete, you can register it as an Amazon FPGA Image (AFI) and deploy it to your F1 instance in just a few clicks. You can reuse your AFIs as many times as you like, and across as many F1 instances as you like. There is no software charge for the development tools when using the FPGA Developer AMI, and you can program the FPGAs on your F1 instance as many times as you like with no additional fees.
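
AFIs are managed through the EC2 API. For example, once a design checkpoint (DCP) has been uploaded to S3, you can register it as an AFI and list the images in your account. A minimal boto3 sketch, with the bucket and key names as placeholders:

  import boto3

  ec2 = boto3.client("ec2")

  # Register an Amazon FPGA Image from a design checkpoint stored in S3 (placeholders).
  afi = ec2.create_fpga_image(
      Name="my-accelerator",
      Description="Example AFI",
      InputStorageLocation={"Bucket": "my-dcp-bucket", "Key": "dcp/my-design.tar"},
      LogsStorageLocation={"Bucket": "my-dcp-bucket", "Key": "logs/"},
  )
  print(afi["FpgaImageId"], afi["FpgaImageGlobalId"])

  # List the FPGA images owned by this account.
  images = ec2.describe_fpga_images(Owners=["self"])
  for img in images["FpgaImages"]:
      print(img["FpgaImageId"], img.get("Name"))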

Instance Features:

  • High frequency Intel Xeon E5-2686 v4 (Broadwell) processors
  • NVMe SSD Storage
  • Support for Enhanced Networking

FPGA Features:

  • Xilinx Virtex UltraScale+ VU9P FPGAs
  • 64 GiB of ECC-protected memory on 4x DDR4
  • Dedicated PCI-Express x16 interface
  • Approximately 2.5 million logic elements
  • Approximately 6,800 Digital Signal Processing (DSP) engines
  • FPGA Developer AMI

Use Cases: Genomics research, financial analytics, real-time video processing, big data search and analysis, and security.

To check the instance types that come under this family, click here.

Storage Optimized

Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.

I3

Amazon EC2 I3 instances are the next generation of Storage Optimized instances for high transaction, low latency workloads. I3 instances offer the best price per I/O performance for workloads such as NoSQL databases, in-memory databases, data warehousing, Elasticsearch, and analytics workloads.

Features:

  • High Frequency Intel Xeon E5-2686 v4 (Broadwell) Processors with base frequency of 2.3 GHz
  • Up to 25 Gbps of network bandwidth using Elastic Network Adapter (ENA)-based Enhanced Networking
  • High Random I/O performance and High Sequential Read throughput
  • Support bare metal instance size for workloads that benefit from direct access to physical processor and memory

Use Cases: NoSQL databases (e.g. Cassandra, MongoDB, Redis), in-memory databases (e.g. Aerospike), scale-out transactional databases, data warehousing, Elasticsearch, analytics workloads.

To check the instance types that come under this family, click here.

I3en

Amazon EC2 I3en instances offer the lowest price per GB of SSD instance storage on Amazon EC2 and are designed for data-intensive workloads such as relational and NoSQL databases, distributed file systems, search engines, and data warehousing. With up to 60 TB of low latency Non-Volatile Memory Express (NVMe) SSD instance storage, I3en instances are optimized for applications requiring high random I/O access to large amounts of data. I3en instances also come with up to 100 Gbps networking bandwidth, 96 vCPUs, and 768 GiB of memory. Customers can enable Elastic Fabric Adapter (EFA) on I3en for low and consistent network latency. I3en instances feature either the 1st or 2nd generation Intel® Xeon® Scalable processor (Skylake or Cascade Lake) with a sustained all core Turbo CPU clock speed of up to 3.1 GHz.

Customers are able to choose from seven different instance sizes to match the price, performance, and storage requirements of their application. I3en instances can deliver up to 2 million random IOPS at 4 KB block sizes and up to 16 GB/s of sequential disk throughput.
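
To compare the I3en sizes programmatically, the instance-type metadata includes the local NVMe storage and vCPU count per size. A minimal boto3 sketch:

  import boto3

  ec2 = boto3.client("ec2")

  # Walk all instance types that have local (instance-store) storage and keep the i3en family.
  paginator = ec2.get_paginator("describe_instance_types")
  pages = paginator.paginate(
      Filters=[{"Name": "instance-storage-supported", "Values": ["true"]}]
  )
  for page in pages:
      for t in page["InstanceTypes"]:
          if t["InstanceType"].startswith("i3en."):
              storage = t["InstanceStorageInfo"]["TotalSizeInGB"]
              vcpus = t["VCpuInfo"]["DefaultVCpus"]
              print(f'{t["InstanceType"]}: {vcpus} vCPUs, {storage} GB NVMe')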

Features:

  • Up to 60 TB of NVMe SSD instance storage
  • Up to 100 Gbps of network bandwidth using Elastic Network Adapter (ENA)-based Enhanced Networking
  • High random I/O performance and high sequential disk throughput
  • Up to 3.1 GHz Intel® Xeon® Scalable (Skylake) processors with new Intel Advanced Vector Extension (AVX-512) instruction set
  • Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
  • Support bare metal instance size for workloads that benefit from direct access to physical processor and memory
  • Support for Elastic Fabric Adapter on i3en.24xlarge

Use cases: NoSQL databases (e.g. Cassandra, MongoDB, Redis), in-memory databases (e.g. Aerospike), scale-out transactional databases, distributed file systems, data warehousing, Elasticsearch, analytics workloads.

To check the instance types that come under this family, click here.

D2

D2 instances feature up to 48 TB of HDD-based local storage, deliver high disk throughput, and offer the lowest price per disk throughput performance on Amazon EC2.

Features:

  • High-frequency Intel Xeon E5-2676 v3 (Haswell) processors
  • HDD storage
  • Consistent high performance at launch time
  • High disk throughput
  • Support for Enhanced Networking

Use Cases: Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing, distributed file systems, network file systems, log or data-processing applications.

To check the instance types that come under this family, click here.

D3

Amazon EC2 D3 instances are optimized for applications that require high sequential I/O performance and disk throughput. D3 instances represent an optimal upgrade path for workloads running on D2 instances that need additional compute and network performance at a lower price/TB.

Features:

  • Up to 48 TB of HDD instance storage
  • Up to 45% higher read and write disk throughput than EC2 D2 instances
  • Powered by the AWS Nitro System
  • Up to 3.1 GHz 2nd Generation Intel® Xeon® Scalable (Cascade Lake) processors with new Intel Advanced Vector Extension (AVX-512) instruction set

Use Cases: Distributed File Systems (e.g., HDFS, MapReduce File Systems), Big Data analytical workloads (e.g., Elastic MapReduce, Spark, Hadoop), Massively Parallel Processing (MPP) Datawarehouse (e.g. Redshift, HP Vertica), Log or data processing applications (e.g., Kafka, Elastic Search)

To check the instance types that come under this family, click here.

D3en

Amazon EC2 D3en instances are optimized for applications that require high sequential I/O performance, disk throughput, and low cost storage for very large data sets. D3en instances offer the lowest dense storage costs amongst all cloud offerings.

Features:

  • Up to 336 TB of HDD instance storage
  • Up to 75 Gbps of network bandwidth
  • Up to 2x higher read and write disk throughput than EC2 D2 instances
  • Powered by the AWS Nitro System
  • Up to 3.1 GHz 2nd Generation Intel® Xeon® Scalable (Cascade Lake) processors with new Intel Advanced Vector Extension (AVX-512) instruction set

Use Cases: Multi-node file storage systems such as Lustre, BeeGFS, GPFS, VxCFS, and GFS2. High Capacity data lakes with consistent sequential I/O performance

To check the instance types that come under this family, click here.

H1

Amazon EC2 H1 instances are a new generation of Amazon EC2 Storage Optimized instances designed for applications that require low cost, high disk throughput and high sequential disk I/O access to very large data sets. Offering the best price/performance in the magnetic disk storage EC2 instance family, H1 instances are ideal for data-intensive workloads such as MapReduce-based workloads, distributed file systems, such as HDFS and MapR-FS, network file systems, log or data processing applications such as Apache Kafka, and big data workload clusters.

Features:

  • Powered by 2.3 GHz Intel® Xeon® E5 2686 v4 processors (codenamed Broadwell)
  • Up to 16TB of HDD storage
  • High disk throughput
  • ENA enabled Enhanced Networking up to 25 Gbps

Use Cases: MapReduce-based workloads, distributed file systems such as HDFS and MapR-FS, network file systems, log or data processing applications such as Apache Kafka, and big data workload clusters.

To check the instance types that come under this family, click here.

Amazon EC2 Instance Types in Single Page

We hope this blog has helped you get the details of all the instance types on a single page.

If you want to check the instance types which are available, click here.

You can also refer to our other blogs on AWS at this link.

If there is any technology you would like to learn, let us know below and we will publish it on our site http://tossolution.com/

Like our page on Facebook and follow us for new technical information.

References are taken from aws.amazon.com

These instance details are as of June 2021 on the AWS site.
