Friday, May 1, 2020

AWS Cloud Security Best Practices

A large part of cloud architecture and application development is creating modular systems that can communicate with one another. The goal of cloud security is to protect the data that flows between and is stored within those systems in the cloud. 
AWS provides cloud services that allow users to create secure applications within the cloud. These services offer 
  • application authentication, 
  • encryption methods, 
  • private key value stores, 
  • application firewalls, and 
  • many other common security features. 
Security in the cloud is important because it allows applications to safely store sensitive data, such as payment information. It enables applications to scale to an enterprise level, and it helps meet compliance requirements for applications in regulated industries such as finance or health care. However, to confidently deploy applications in the cloud with state-of-the-art security measures, proper knowledge of AWS service offerings is essential. AWS offers the following services related to cloud security, identity, and compliance. 
  • Identity and Access Management (IAM) can be used to create AWS users and manage access to resources in the cloud. 
  • Virtual private cloud, or VPC, is a virtual network that contains application resources. 
  • Key management service, or KMS, allows you to easily create and manage keys used for data encryption within your applications. 
  • CloudTrail is a service that provides a complete audit trail of any actions taken by a user within your AWS environment. 
  • CloudWatch provides a scalable logging solution for applications in AWS services that can be used to create alarms for potential security threats or application failures.
  • Amazon Inspector is a security assessment service that can scan your cloud environment for potential security threats and application vulnerabilities.

Within Identity and Access Management, every AWS account has a root user. The root user has full AWS access and can even deactivate the account or decommission application stacks. AWS strongly recommends that the root user is not used for everyday tasks and is only used to create the first IAM user. IAM users are accounts that are granted access to AWS. For example, these users can represent members of a development team or even your infrastructure admins. IAM users must be given access to specific AWS resources in order to interact with them. To provide that access, you attach an IAM policy to the user. A policy is an object that can be assigned to an identity to define its permissions. IAM roles are used to provide entities access to cloud resources. IAM roles have one or more IAM policies associated with them that indicate what resources the role can access, for example, an IAM role with a policy allowing access to an AWS DynamoDB database, attached to the compute resource that runs your back-end application code. This setup gives the application's back end the ability to communicate with the database securely. IAM users, by default, are not provided access to any AWS resources. It is important to attach policies to users so that they gain access to the AWS resources they need in order to perform their daily job tasks. If an AWS user is not provided explicit access to cloud resources through an IAM policy, that user will not have access to any resources. As a security expert, it's important to abide by the principle of least privilege, which means that IAM users should only be provided access to the resources needed to perform their tasks and nothing more. Doing so creates a more secure cloud environment.
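As a rough illustration of roles and policies, here is a minimal boto3 (Python) sketch that creates a role the Lambda service can assume and attaches the AWS managed read-only DynamoDB policy to it. The role name is hypothetical and not part of this course; in practice you would scope the permissions to the exact tables your back end needs.

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy allowing the Lambda service to assume this role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="backendAppRole",  # hypothetical role name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # Least privilege: the back end only needs to read from DynamoDB.
    iam.attach_role_policy(
        RoleName="backendAppRole",
        PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess",
    )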
Creating AWS Users with Identity and Access Management (IAM)

To see this in action, we are going to log in to the AWS console as the root user to create the first IAM account. We will learn how to set up a new account and enable multi-factor authentication for added security in our environment. Remember, policies allow access to AWS resources. Our user will need an IAM policy attached in order to access any resource within the AWS environment. To accomplish this, we will create a new IAM user using the root account, create a new IAM user group for that user, and attach a policy to that user group that allows access to AWS S3, then log in as the new user and access S3 directly.

When logged in to the AWS Management Console as the root user, you can scroll down through the AWS service offerings to find Identity and Access Management under the Security, Identity, and Compliance section. In the IAM dashboard, you can create and manage access for all of the users within your AWS account. Because this is a new account with no users, we're going to select the Users button on the left-hand side of the dashboard to create our first IAM user account. In the IAM users dashboard, we'll select Add user to create our first user account. We'll create the user with the name of John Smith and give that user access to the AWS Management Console. On the Permissions page, we're going to choose to create a new group of IAM users. We'll set the group name as Developers, and then we'll scroll down to add a policy to our IAM user group. We'll search for the S3 full access policy and select it to give that IAM policy to our user group. This means that all IAM users assigned to our Developers group will have full access to AWS's S3 service offering. We'll click Next to review the user that we've created, and we can now see that John Smith is an IAM user in the Developers group. Now that we've reviewed everything and it looks okay, we'll click Create user to create our new user account.

In the IAM dashboard, we can see that John Smith is now one of the available users. We'll select John Smith to view the user summary page. From here, we can view the user's security credentials and assign a custom password for this account. We'll set the new password as changeit and then select the checkbox to make sure the user is forced to create a new password upon login. This is an important security feature because it forces all IAM users to have their own unique password. To log in with our new user account, we can copy the console sign-in link from the AWS dashboard and enter it into a browser window. Once the login page loads, we'll enter the IAM username of JohnSmith and our user's password of changeit into the Password field and select the Sign In button. Remember, earlier we specified that all of our users must create a new password upon login. So we'll enter our current password of changeit and create a new user password. Now we have logged in to the AWS Management Console as our new user, John Smith, and can view all the AWS service offerings. When we select S3, we can see that our user John Smith has access to the S3 service. This is because we assigned John Smith to the Developers IAM group earlier, which had the S3 full access policy attached to it. If we go back to the AWS Management Console home screen and attempt to view another service that our user doesn't have access to, such as Identity and Access Management, the console will display red error messages indicating that we don't have sufficient IAM privileges to view it. 
If we wanted this user account to have permissions to view another AWS service, we would have to associate an IAM policy for that service to our user account.
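For readers who prefer to script this setup rather than click through the console, a minimal boto3 (Python) sketch of the same steps might look like the following. The group name, user name, and starter password mirror the demo; the AmazonS3FullAccess managed policy ARN is a real AWS policy, and note that your account's password policy may reject a weak password such as changeit.

    import boto3

    iam = boto3.client("iam")

    # Group whose members receive full S3 access through the AWS managed policy.
    iam.create_group(GroupName="Developers")
    iam.attach_group_policy(
        GroupName="Developers",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
    )

    # Console user who must change the initial password at first sign-in.
    iam.create_user(UserName="JohnSmith")
    iam.create_login_profile(
        UserName="JohnSmith",
        Password="changeit",          # demo value; a real password policy will likely reject this
        PasswordResetRequired=True,
    )
    iam.add_user_to_group(GroupName="Developers", UserName="JohnSmith")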
Overview of Additional AWS Services Used for Security

A virtual private cloud, or VPC, is a provisioned section of the AWS Cloud in which you can launch resources. Your VPC will essentially be a virtual network that contains all of the cloud resources you will need to run your application. An allocated range of IP addresses for your application will be defined in a logical division of your network called a subnet. Subnets can be divided into both public and private networks. Public subnets are used for resources that need to communicate with the internet, whereas private subnets do not require an internet connection. For example, a database used by your applications should be hosted in a private subnet so that it is protected from potentially malicious traffic coming from the internet. Each VPC, by default, includes an internet gateway. Like the name indicates, an internet gateway allows instances in your VPC to connect to the internet. For instances in which we want the ability to initiate outbound internet connections while restricting any inbound internet connections, we can use a network address translation gateway, or NAT gateway. A typical VPC consists of at least two subnets, one public and one private. Our private subnet will consist of resources that don't need to be publicly accessible. An industry security standard is to host your database or any background services that shouldn't be accessible from the internet in this subnet. Our public subnet will host any resources that we want to be publicly accessible through the internet. Our NAT gateway will be hosted in the public subnet so that instances in our private subnet can access the internet through it. And our application instances will be in this subnet because they will require inbound access through the internet when our application is used. A router, or route table, will live inside our VPC in order to facilitate internal and external traffic within our cloud network. And lastly, our router will have a connection to our internet gateway so that our VPC has stable and secure connectivity to the internet. It is important when creating applications in AWS that you logically place resources in the correct subnet and restrict any unauthorized access so that your data and your application are secure.

Key Management Service, or KMS, is an AWS service that creates and manages keys used for encryption of private data. The primary resource in KMS is the customer master key. These keys are used to directly encrypt and decrypt data. AWS provides the ability to manage these keys yourself, or you can let AWS manage your keys for you. All KMS keys are protected using hardware security modules that ensure none of the keys leave the hardware modules unencrypted. An online banking application is a great example of an application that would require encrypted data. The application would store sensitive personal information, such as bank account numbers, social security numbers, and account balances, that would require encryption to meet security standards and compliance regulations. Typically, when encrypted data is stored in a database, it can be retrieved via an authenticated API. An API hosted on an AWS EC2 instance would retrieve encrypted ciphertext from a database during execution. The application then would call KMS to retrieve the applicable customer master key for the application. Using the key, the application could decrypt the data into plaintext format. 
The decrypted data would then be returned through the initial API call to be displayed in the user interface of the application. In this example, the EC2 instance would require IAM policies in order to securely communicate with the database and the Key Management Service. For security reasons, it is always recommended that all API calls that retrieve encrypted data require authentication and restrict access from unauthorized users.

CloudWatch is an AWS service that provides real-time logging for your AWS resources and applications. Logs in CloudWatch are organized into containers called namespaces. Namespaces can represent a specific AWS resource, such as EC2, or an entire application. CloudWatch log events contain data points called metrics. Metrics can be used to monitor specific application information or information regarding cloud resources, such as the CPU usage of an EC2 instance running your application code. You can configure CloudWatch to automatically trigger custom actions based on metrics using alarms. An alarm watches a metric over a specified time period and performs actions when the metric meets a configured threshold. A powerful feature of CloudWatch is the ability to collect aggregated metrics from various cloud resources. The CloudWatch Logs agent can be used to send system-level metrics to CloudWatch to create a single source of truth for all of your application logs.

CloudTrail is an AWS service that provides logging of any changes within your AWS account through API calls. Any action taken through the AWS console results in an API call that will be logged to CloudTrail as an event. You can create trails to organize and audit related events within your AWS account. CloudTrail can easily be integrated with Simple Notification Service to send out notifications, such as emails, for any changes to your infrastructure. An easy way to differentiate CloudWatch and CloudTrail is that CloudWatch stores logs for your application, whereas CloudTrail stores logs for your AWS Cloud infrastructure. Application events, failures, and metrics would be aggregated in CloudWatch so that AWS users can easily view and query logs. Any changes to your AWS infrastructure would be logged in CloudTrail. CloudTrail provides a complete audit of any API call made within your AWS account.
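Returning to the KMS banking example above, the encrypt/decrypt round trip could be sketched with boto3 (Python) roughly as follows. The key alias is a placeholder; the snippet is illustrative and not part of the course.

    import boto3

    kms = boto3.client("kms")

    # Hypothetical customer master key alias for the banking application.
    key_id = "alias/bankingAppKey"

    # Encrypt a sensitive value before writing it to the database.
    ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"123-45-6789")["CiphertextBlob"]

    # Later, the API retrieves the ciphertext and asks KMS to decrypt it.
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
    print(plaintext.decode())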
Enabling Secure Communication Between AWS Services

We are going to demonstrate how to create a secure application within AWS using a combination of the service offerings that we have reviewed. To accomplish this, we will create an IAM policy that allows our application to access an AWS DynamoDB database. We will use KMS to encrypt data in the database with an encryption key, and we will view application and infrastructure logs with CloudTrail and CloudWatch. For this tutorial, we're going to be using AWS Lambda combined with API Gateway to create an API that returns a user's bank account balance from a DynamoDB database. This will not be a coding tutorial. Instead, we will use the AWS Management Console to show how to enable secure communication between AWS services.

To get started, we will find API Gateway in the list of available services in the AWS Management Console. In API Gateway, you can see that we already have an API created to get a user's account balance. We can look at the stages of this API to find an invoke URL. This will allow us to invoke our AWS Lambda function through a REST HTTP endpoint. As you can see, our API returned a 500 status code, indicating that it did not execute properly. If we look at the response body, we can view an error message indicating that the AWS Lambda function does not have sufficient privileges to access the DynamoDB database. To fix this issue, we'll need to ensure that the AWS Lambda function has an IAM role with an attached policy that allows access to DynamoDB. If we head back to the AWS Management Console, we can find the Lambda AWS service offering. Here we can view our getAccountBalance lambda function. In the Lambda dashboard, we can view our triggers, application code, and any IAM roles attached to our lambda function. In the Execution section, we can see there's a service role called accountBalanceRole attached to our function. Based on the error message we received earlier for insufficient privileges, we can assume this role does not have an IAM policy attached that allows access to DynamoDB.

To add an IAM policy, we'll go to the IAM dashboard from the AWS Management Console. Here we can click on Roles to find the role that's associated with our lambda function. Then we will select the accountBalanceRole so that we can attach a new policy to it. On the Role Summary page, you can attach and view all policies associated with your IAM role. We can see that there is not a policy attached for AWS DynamoDB, so we'll click Add inline policy to create one. When creating a policy, we can use a visual editor or manually type in JSON to create an IAM policy. Using the visual editor, we're going to search for the DynamoDB AWS service so that we can add it to this policy. Because our Lambda function is only used to get account balances and not to update them, it's important for security reasons that we only give it read-only access. To create secure applications, it's important to always follow the principle of least privilege and give resources only the minimum amount of access they need to execute their job function. In the Resources section, we can select the specific DynamoDB tables that we want to give access to. In this example, we are going to assign all resources to this policy. However, for increased security, it would be more ideal to specify only the specific table we need access to. Once we've selected all resources, we will click Next to review the policy that we've created. 
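As a brief aside, the same read-only access could be granted from a script rather than the visual editor. A rough boto3 (Python) sketch is below; the exact list of read actions is my assumption, and, like the demo, it uses a wildcard Resource even though a specific table ARN would be tighter.

    import json
    import boto3

    iam = boto3.client("iam")

    # Read-only DynamoDB actions; scoping Resource to the account-balance
    # table's ARN would better follow least privilege.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
            "Resource": "*",
        }],
    }

    iam.put_role_policy(
        RoleName="accountBalanceRole",
        PolicyName="getAccountBalancePolicy",
        PolicyDocument=json.dumps(policy),
    )

Back in the console's visual editor, we continue with the review step.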
We'll define a policy name of getAccountBalancePolicy because that's indicative of what the policy is allowing. Now that we have a name, we can select Create policy to finish creating the policy. You can now see that the policy is attached to the IAM role that we found earlier. If we select the policy, we can view the policy summary and the JSON that corresponds to the policy we've created for DynamoDB access. Now that we've added DynamoDB access to the role for our Lambda function, we should be able to get a user's account balance. If we head back to API Gateway, we can test to see if our recent role change fixed our Get Account Balance API call. We can navigate back to our production stage to get the invoke URL for our service. Selecting this, we can see that we now get a status code of 200, which indicates a successful API call. And in the response body, we can see that our user Ford Prefect has an account balance of $42.00 in his account. This is an example of using IAM to enable secure communication between various AWS services. AWS provides a full audit trail of everything that happens in your AWS infrastructure through CloudTrail. This gives users the ability to fully monitor and manage their entire infrastructure. You've now learned how to use Identity and Access Management to create secure applications within AWS. You understand the fundamental concepts of how to view application logs with CloudWatch and an audit of infrastructure logs within CloudTrail. We know that KMS is the AWS service that can be used to encrypt and decrypt data in the cloud. So now you should have a better understanding of how to create secure applications, which is important for a variety of reasons: it allows your applications to scale to an enterprise level, it protects sensitive data against malicious attacks, and it can increase application revenue by providing your customers a secure solution. Now we'll head into the next module, where we will go into more detail on cloud security within AWS.
Define the Shared Responsibility Model

Understanding the Shared Responsibility Model

Hello. My name is Jordan Yankovich. In this module, we will review the AWS shared responsibility model to better understand the delineation of security requirements between Amazon and the customer for applications hosted in the AWS Cloud. We will also identify the three different service types defined in the shared responsibility model and do a deep dive on a few AWS services to show how to secure our cloud applications. In this module, you'll learn about the shared responsibility model for applications in AWS and how secure applications can be deployed by leveraging existing AWS services. We'll learn about the responsibilities we have as a customer to deploy secure applications in the cloud and how we can leverage AWS service offerings for faster application development and added security. You'll learn about the security responsibilities of Amazon and those of its customers, and how they pertain to your application development on Amazon's infrastructure. We'll learn about the three AWS service type categories used for the purpose of understanding security and shared responsibility. You'll learn how the infrastructure category is used to identify compute services and that a customer will be responsible for controlling the operating system and identity management for these services. Next, we will cover container services. Container services typically don't require the customer to manage the operating system or platform layer; rather, we will be responsible for network controls, such as firewall rules and access management. Lastly, we will learn that abstracted services, such as Amazon S3, are used for high-level storage, and typically you are only responsible for managing the integration with Identity and Access Management, otherwise known as IAM.

A short definition of the shared responsibility model is that AWS is responsible for security of the cloud and that the customer is responsible for security in the cloud. That sounds nice, but what does that mean exactly? Well, security of the cloud relates to the physical infrastructure of Amazon's entire cloud platform. This means that the physical servers running AWS services and applications will be secured and maintained solely by Amazon. That means you could have a secure environment and never have to interact with any physical computing device while using the AWS Cloud if you don't want to. But there's more. Amazon also provides secure communication across its physical network and plenty of AWS managed service offerings to help speed up your application development without sacrificing security. From authentication for our applications down to our infrastructure, it will be our responsibility to abide by security best practices. For example, AWS provides Identity and Access Management as a service to secure communication between cloud resources. However, it is our responsibility to configure and use this service properly. In the cloud, we will also be responsible for encrypting application data, configuring security groups and firewalls, and protecting our application from malicious attacks. AWS provides a suite of services that enable cutting-edge application security, but it is our responsibility to implement them and configure them properly to meet our security and compliance needs. AWS is responsible for both hardware and software within their cloud environment. Amazon has a very complex infrastructure spread across the planet. 
They're responsible for ensuring that their network of datacenters is available and reliable for mission-critical applications. Their infrastructure includes an array of regions to manage network latency and availability zones for fault isolation. For software, Amazon is responsible for the technology services it provides for your infrastructure and applications. IAM is an AWS-hosted service that is controlled by Amazon. Although AWS is responsible for maintaining the service, the customer is responsible for all IAM configurations within their account.

Let's do a deeper dive into all of AWS's responsibilities. A fundamental concept of cloud technology is that the customer no longer needs to manage any physical hardware. So AWS's primary responsibility is the vast network of datacenters and physical servers they provide their customers via their easy-to-use management console. On top of their cloud network, Amazon has created an integrated suite of robust service offerings for creating applications in their environment. The secure networking between these services is also fully managed by Amazon, but configured by the customer. Storage platforms, such as S3, are provided and maintained by Amazon. You'll never have to write code to store static files or manage storage capacity when using their storage options, which scale elastically. Amazon also offers fully managed database services, such as DynamoDB, that take away some of the common burdens of maintaining an enterprise-level database. Identity and Access Management, commonly referred to as IAM, is a fully managed AWS service offering that provides access management and authentication for users and between AWS services. When using IAM roles and policies between AWS services, IAM will automatically manage key rotation and authentication between the services for you securely.

In the shared responsibility model, as the customer, we are responsible for the security in our cloud environment. For all applications we host on AWS, it is our responsibility to ensure they are secure, available, and configured to scale and meet demand. We will also need to configure our firewalls and authentication to allow access into our systems. By default, AWS systems and services deny all access unless configured otherwise. We may host our application data on AWS services, but we will need to encrypt private data if necessary and protect all data from unauthorized access. Even when using AWS infrastructure, the customer has many responsibilities to ensure their application meets security and compliance requirements. As a customer, we are responsible for all hosted applications and application code within our AWS account. We are responsible for implementing proper authentication and access principles for users of our applications. Data encryption and protection are a customer responsibility. AWS services, such as Key Management Service, can take some of the burden away from the customer, but it's our responsibility to configure and use them properly to ensure our data is encrypted and secure. It is always more secure to have your data encrypted at rest and in transit. Data at rest includes records sitting in your database or files in S3, while data in transit could mean data flowing through an API call or between AWS services, such as an SQS queue. Web application firewalls and virtual private cloud configurations are customer responsibilities to restrict and secure access to cloud resources from trusted sources. 
AWS services offer many solutions for protection of your resources, but it's important to understand the fundamental concepts of security in order to configure these properly and prevent malicious attacks. Lastly, we are responsible for managing users within our account and creating IAM roles and policies for communication between our cloud services. It's important to always apply the principle of least privilege, which means to only give a user or a resource explicit access to the resources it needs to perform its task and nothing more. Providing additional access to users and resources that they don't need will only create a security vulnerability within your system.

Let's look at an example of a simple AWS application. A web application will communicate with an application running on an EC2 instance via a REST API call. The application will return data to the web application from an AWS-hosted database. Let's break down this simple example with our knowledge of the shared responsibility model and determine what components of this setup will be AWS's responsibility versus our responsibility as the customer. AWS will be responsible for the security of the cloud, so that would include the physical machine our EC2 instance is running on and the hardware or managed AWS service for our database. We are responsible for security in the cloud. So the application running on the EC2 instance would be our responsibility to secure, along with the data protection and encryption for what goes into our database. This looks good, but we are still missing a very key piece for security. Our application right now doesn't have any authentication, and we need to implement better security. So in order to do this, we would need to add authentication to our API that allows communication from our web application to our EC2 instance. Also, we will need to provide an IAM role to our EC2 instance with appropriate permissions to communicate with our database. Remember, always use the principle of least privilege when granting access to AWS systems. As an AWS customer, it is our responsibility to ensure proper authentication for our cloud applications.
Securing Infrastructure Services

For the purpose of understanding security and the shared responsibility model, AWS has broken up all service offerings into three different categories. Compute services belong to the infrastructure category. These services allow you to build and architect applications similar to how you would with an on-prem solution. For services in this category, we will control and manage the operating system. Container services are AWS managed services that typically run on an infrastructure service, such as EC2. For these services, we typically will not manage the operating system or the platform layer. And lastly, abstracted services are services that we interact with using AWS APIs, and AWS will fully manage the underlying service components or the operating system on which they reside.

Let's dig into the shared responsibilities for the infrastructure service types a little bit deeper. Remember, infrastructure services are services that provide computing power. So for these services, AWS would be responsible for the physical hardware and global infrastructure on which the applications using these services reside. All of AWS's foundational services used to operate and maintain the infrastructure services are managed by AWS. But any application that we host using these services would be our responsibility. We would need to manage the operating system, application code, firewall configurations, and authentication and access to the compute instance. You can see that IAM is listed as both an AWS and a customer responsibility, which can seem a little confusing at first. IAM will be used to authenticate and provide identity to users and resources that interact with services in the infrastructure category. AWS will be responsible for the actual IAM service offering, but we will be responsible for configuring and using the service properly to secure our infrastructure service instances.

There are several AWS service offerings that fall into the infrastructure category, but let's cover a few of the most frequently used ones and talk about our responsibility as the customer to secure these services. EC2 is an infrastructure service that provides resizable computing capacity. This service literally provides users access to servers within Amazon's data centers to build and host our own applications. As an infrastructure service, we would be responsible for controlling the operating system and applications that reside on these instances. It will also be our responsibility to ensure access to these instances is restricted to only known entities and is secure. EBS provides persistent block storage for our EC2 instances. Just like EC2, all of our EBS volumes will exist within an actual AWS data center. We won't be responsible for any of the hardware on which our EBS volumes run, but securing the volumes and managing the data on them will be our responsibility under the shared responsibility model. A virtual private cloud, or VPC, is an isolated section of the AWS Cloud in which we can host our cloud resources. We have control over all configurations for our VPC, such as firewalls, route tables, and network gateways, and should implement best practices to secure our cloud resources appropriately. Now that you have some context, let's take a firsthand look at how to use an infrastructure service. For this demo, we will create a compute instance in the cloud and securely access the instance after creation. 
To accomplish this, we will create an EC2 instance using the AWS console and set up a key pair that will be required to connect to our instance. Then we'll enable inbound traffic to our instance so that we can access it via SSH. And lastly, we will connect to our instance using the SSH protocol. We'll start by finding the EC2 service in the AWS console and selecting Launch instance to create a new EC2 instance. Here we can choose an Amazon Machine Image, or AMI, to use for our EC2 instance. We're going to go ahead and select the free tier Linux instance to get started. We'll use the free tier eligible instance type and press Review and Launch to create our EC2 instance. Everything looks okay, so I'll press the Launch button. We are now being prompted to select a key pair for our instance. A key pair consists of a public key that AWS stores and a private key that we store. In the shared responsibility model, we will be responsible for managing and securing all private keys that we create. We'll go ahead and create a key named testkeypair and select Download Key Pair to download the .pem file. It's important to keep our private key file safe because anyone that has access to this file will be able to connect to our EC2 instance. After saving our private key, we'll select Launch Instances to create our EC2 instance. Once it's done loading, we can see that our instance is now available in the EC2 dashboard.

In the EC2 dashboard, we can select the Connect button to get more information on how to connect to our instance. Because this is an instance created from the Linux-based AMI, we are prompted to use an SSH client to connect. Users running a Windows-based operating system can use PuTTY to connect using the SSH protocol. However, I'm running macOS, so I'll use the built-in terminal because it supports the SSH command. If we scroll down, we can copy the example SSH command that can be used to connect to our instance. We'll copy this to our clipboard so that we can easily use it in a terminal later to connect. If we open up the terminal in the same directory that contains the testkeypair private key file we created earlier, we can execute the SSH command to connect to our instance. We can see that we get an error message indicating we have an unprotected private key file. To fix this, we can run the chmod command to make sure our file is not publicly viewable. If you're worried about forgetting this command, it can always be found on the same window that has the connection information for your EC2 instance. Now that we've resolved the issue, we can rerun the SSH command to successfully connect to our AWS EC2 instance. If we think back to the shared responsibility model, for infrastructure services such as Amazon EC2, we are responsible for the operating system and the applications that we run on the instance, as well as the security of the private keys we use to connect.
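If you would rather script this demo than use the console, a rough boto3 (Python) equivalent is sketched below. The AMI ID and the trusted IP address are placeholders, and the security group that restricts SSH to a single address is my addition rather than a step from the console walkthrough.

    import boto3

    ec2 = boto3.client("ec2")

    # Create a key pair; securing the downloaded private key is our responsibility.
    key = ec2.create_key_pair(KeyName="testkeypair")
    with open("testkeypair.pem", "w") as f:
        f.write(key["KeyMaterial"])
    # Before using it with ssh -i, restrict the file permissions (chmod 400 on macOS/Linux).

    # Security group that only allows SSH (port 22) from one trusted address.
    sg = ec2.create_security_group(GroupName="ssh-only", Description="SSH from a trusted IP")
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpProtocol="tcp", FromPort=22, ToPort=22,
        CidrIp="203.0.113.25/32",          # placeholder trusted IP
    )

    # Launch a free tier instance from a placeholder Amazon Linux AMI ID.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder; look up a current AMI for your region
        InstanceType="t2.micro",
        KeyName="testkeypair",
        SecurityGroupIds=[sg["GroupId"]],
        MinCount=1,
        MaxCount=1,
    )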
Securing Container Services

Container services are services that are heavily managed by AWS. For services that fall in this category, AWS will manage the operating system and the foundational service entirely for you. AWS and the customer will both have responsibilities regarding the firewall configurations for these services. This is unique to the container service category. As the customer, our primary responsibilities will be to secure the data and to configure the firewall rules that control access to these container services. And again, for container services, the customer and AWS will have shared responsibility for the identity and access management of the services. Let's look at a few of the container services AWS provides and understand what our responsibility would be with regard to security. RDS is a service that makes it easy to set up and scale a relational database in the cloud. RDS takes away many of the tedious maintenance tasks associated with hosting an enterprise-level database. AWS is responsible for providing this service, but we will be responsible for all data that resides in the database and for configuring database access properly. Elastic MapReduce, or EMR, is a container service that allows users to easily run and scale big data frameworks in the cloud. Our responsibility to secure this service is similar to RDS in that we will need to secure all data within EMR and control access to the service within our account. Elastic Beanstalk is a powerful AWS service that allows you to quickly deploy an application without having to worry at all about the infrastructure. When using this service, some of the security tasks you will be responsible for include configuring firewalls and securing and encrypting your application data.

Now that we have a better understanding of what a container service is, we are going to get some hands-on experience securely implementing one in the AWS Cloud. We are going to demonstrate the customer's responsibility of securing access to a container service by implementing security groups. The security groups will limit the inbound connections to an RDS instance based on their IP addresses. To demonstrate this, we will first connect to a MySQL RDS database instance that is publicly accessible over the internet. Then we will update the security group, which acts as a firewall for container services, to limit incoming traffic to a specific IP address. And lastly, we will test the new security group rule by ensuring access is blocked from unknown IP addresses. Okay, so to get started, we will head to the RDS service in the AWS console. From the RDS dashboard, we can view all of the database instances that we have created. So we will go ahead and select DB Instances to get the connection information for our running instance. In our database list, we have a testdbinstance created for this demo. So let's go ahead and select that to move forward. This page will show us all of the information regarding our instance, including connection information, database logs, backups, and much more. If we scroll down to the Connectivity and security section, we can find the endpoint and port used to connect to this instance. Now that we have our connection information, we can use a database client to connect to our database instance. Our RDS database is of the MySQL database type. So we're going to use a MySQL client called MySQL Workbench to demonstrate how to make connections to this database. 
If we select to view the connection information for RDS, we can see the hostname and port match the information from our AWS RDS dashboard for our test instance. In this configuration, we can also enter a username and password that will be required for us to connect. Now that we have verified the connection information is valid, we will close the dialog and attempt to connect to our instance. We can see that the connection worked successfully, and we can now view the schemas for our cloud database. Let's reflect back on the shared responsibility model for container services, such as RDS. We know it's our responsibility to secure the connections to these AWS services. So if this database held sensitive data, we would want to restrict connections to only known entities. To do this, we're going to head back to the AWS console. We can view the security group rules for this database instance directly in the dashboard. Security groups operate similar to a firewall for your services. By default, all inbound and outbound traffic to RDS is blocked unless there is a specific security group rule that allows access based on the IP address. We can see that the inbound security rule is set with an IP address of all 0s. This indicates that any IP address will be accepted by this rule. It is never best practice for a security group to allow inbound access from every single IP address possible. So in order to make this RDS instance more secure, let's go ahead and update the security group.

On the security group's page, we can view the inbound and outbound rules for this security group. Again, we can see that our security group allows inbound access from any IP address, indicated by the all 0s in the IP address field. So let's go ahead and update that. If we select Edit, we can configure our security group. The source field is what is used to indicate the IP address that is allowed access. We can set the source field to our IP address or any custom IP address. If you have a hosted database that only your company would need access to, it would be a good idea to limit the IP addresses in your security group to a known range, such as your company's network. It is a security risk and a potential vulnerability for your system if you allow inbound access from all IP addresses. To demonstrate how security groups block inbound traffic from unknown entities, let's type a random IP address in the source field and then try to connect to the instance again with MySQL Workbench. Let's save our changes and head back to our database client. This time we'll use the exact same connection information that we had before to try and connect again. And sure enough, our connection is now blocked. This shows how security groups for container services act as a firewall for incoming traffic. Once we updated our security group to only allow access from a single IP address, all other IP addresses are blocked and cannot connect. Within the shared responsibility model, it's important to remember that we are responsible for the security groups and their configurations to create a secure cloud environment within AWS.
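A scripted version of this security group change, assuming a placeholder security group ID and office IP range, might look like the following boto3 (Python) sketch; the original demo performs these steps in the console instead.

    import boto3

    ec2 = boto3.client("ec2")
    sg_id = "sg-0123456789abcdef0"   # placeholder: the security group attached to the RDS instance

    # Remove the open-to-the-world MySQL rule (0.0.0.0/0 on port 3306)...
    ec2.revoke_security_group_ingress(
        GroupId=sg_id, IpProtocol="tcp", FromPort=3306, ToPort=3306, CidrIp="0.0.0.0/0",
    )

    # ...and allow inbound MySQL connections only from a known network range.
    ec2.authorize_security_group_ingress(
        GroupId=sg_id, IpProtocol="tcp", FromPort=3306, ToPort=3306, CidrIp="198.51.100.0/24",
    )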
Securing Abstracted Services

The abstracted service category in the shared responsibility model is used to categorize services AWS provides for solutions such as storage and messaging. For these services, AWS will be responsible for the infrastructure, operating system, foundational services, and platforms for the service offerings. As the customer, we will primarily be responsible for managing the endpoints that are used to store and retrieve data, while we let AWS manage the underlying service components or operating system for these services. AWS has many popular service offerings that fall into the abstracted service type. S3 is storage for the internet, providing users the ability to easily store and retrieve any amount of data in the cloud. For S3, we will typically be responsible for securing the data we upload and for the configurations that restrict and allow access to our S3 buckets and objects. DynamoDB is an AWS-hosted NoSQL database solution. With DynamoDB, we can use AWS endpoints to create and use cloud-hosted NoSQL databases. Similar to S3, we will be responsible for the data we upload and how it's accessed. SQS queues are abstracted services used to decouple microservices in the cloud with a queue. AWS will manage the queuing service for us, but we will need to create security policies for our queues to ensure only known entities are allowed to add, read, or remove data from our queues. Glacier is a low-cost service used for storing infrequently used data. Managing Glacier will be very similar to managing S3 because Glacier is a very similar solution. It's important when archiving S3 data into Glacier that Glacier has sufficient security implementations to restrict access. For example, if a user or resource is unable to access files in S3, they likely should not be able to access the files once they've been moved to Glacier.

We now know the different types of AWS abstracted services, so let's take a look at how we can secure an abstracted service in the cloud. To see this in action, we are going to use S3 to store files in our AWS account and restrict access to the files from unknown entities. To demonstrate some of the security measures you can implement with S3, we are going to view files in an S3 bucket. We will show how public files are accessible to anyone without any security restrictions and then how to restrict bucket access in various ways, including using public access settings to set global settings for an S3 bucket; access control lists, or ACLs, which can be used to add read or write access for a bucket to a different AWS account; and bucket policies, which can be used to create advanced configurations for access, such as allowing entities to only access a specific directory within a bucket. For this demonstration, I've created a bucket called pluralsight-introduction-to-aws-security. In this bucket, you can see that we have a test image. If we select the image, we can view the image properties page and set specific permissions for this image alone. Let's go ahead and select to make the image public. If we scroll down to grab the object URL, we can then navigate to that URL in our browser to view our public image. A public image is one that is accessible to anyone, anywhere. So any computer that has this URL would be able to download and view this image. When hosting public websites, it can be useful to have images that are fully accessible to any device. 
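For reference, the two permission changes made in this demo could be scripted roughly as follows with boto3 (Python). The bucket name matches the demo, while the object key is a placeholder, and the bucket-level lockdown at the end corresponds to the public access settings discussed next.

    import boto3

    s3 = boto3.client("s3")
    bucket = "pluralsight-introduction-to-aws-security"

    # Make a single object public, as in the first half of the demo.
    s3.put_object_acl(Bucket=bucket, Key="test-image.png", ACL="public-read")  # placeholder key

    # Later in the demo, public access is blocked for the whole bucket at once.
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

Back in the console, let's continue with the demo.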
However, it's very important for security that any image that does not fall into this category is not made public and has the proper security restrictions. Back in the S3 dashboard, we can view the specific permissions for this image. Here we could enable or disable public access for this image specifically. However, if we wanted to make a change to all images within our bucket and not just affect one image, we could actually update the security settings for the entire bucket at once. At the bucket level, we can set permissions in a variety of ways. We can use public access settings to manage whether the bucket is private or not. We can use access control lists, also referred to as ACLs, to grant other AWS accounts access to our buckets. And we can create granular bucket policies for advanced configurations. Bucket policies are very powerful. For example, if you have a social media app that hosts images in an S3 bucket, you could use bucket policies to restrict access to images and files in the same bucket on a per-user basis. That would mean that each user would only have access to their own images within a shared S3 bucket. For our example, we are going to remove public access on this bucket and see how it affects our image from before. When we select Continue, we'll have to type confirm because this could be a breaking change for our environment. Once we've updated our configurations to restrict public files, we're going to head back to our image to see how it was affected. If we go back to our sample image, we can select the Permissions tab to view the permissions for this image. We can now see that the image no longer has public read access. So it shouldn't be accessible from the URL by any resource anymore. If we grab the object URL again, we can try to open up this image in a browser. We now receive an AccessDenied message when trying to retrieve our image. This indicates that our image is no longer publicly accessible to anyone over the internet and is now more secure. Specifying access and securing files in S3 can be quite complicated, but this quick example shows just how easy it is to prevent private files from being accessible over the internet in S3.

Let's do a quick overview of what we learned in this module. We learned about the AWS shared responsibility model and how it specifies the security requirements of a cloud infrastructure for both Amazon and the customer. We covered the three types of AWS service categories defined by the shared responsibility model, including infrastructure services that offer compute power, such as EC2; container services that offer AWS managed solutions, such as Relational Database Service; and abstracted services that offer cloud storage, such as S3. We've talked about how important having a secure cloud environment is to your application. Having a secure cloud allows you to create enterprise-level applications with upwards of millions of daily active users, and it also allows you to protect your data in the AWS Cloud. That's it for this module. Continue on to our next module, and we will cover how to maintain a secure cloud environment in AWS.
Maintaining Physical and Environmental Security of AWS

Maintaining a Secure Environment

Hello. This is Jordan Yankovich. In this module, we are going to learn how to properly maintain security within an AWS Cloud environment. As our environment and applications grow, it's important to monitor and increase the security measures in our environment. Let's get started. Maintaining infrastructure security can mean a lot of things. In this module, we will cover a few processes you can implement as a security administrator for your environment to reduce your vulnerability to a cyberattack. We previously covered the principle of least privilege, which is the security concept that users and resources should only be provided access to the resources needed to perform their functions and nothing more. We will learn how to create granular access with IAM policies in order to abide by this principle. You'll learn how logging can be implemented to improve the security of your application and how you can set up automated alerts for application failures or custom application events. Then, we will cover how to detect potential security vulnerabilities within your AWS account. You will learn how to use Amazon Inspector to create security assessments tailored specifically to your AWS Cloud environment's vulnerabilities and how you can address and mitigate these security vulnerabilities for your applications within the cloud. Although there are many AWS services that can be used to increase your cloud security, we will only focus on three core concepts within this module. We will cover how we can manage access for users and resources using Identity and Access Management, or IAM; how we can view and query application logs in CloudWatch and set up alerts to notify us of specific events, such as application failures; and lastly, how we can assess the security of our environment by running scans with Amazon Inspector. These scans will identify potential security threats and prioritize for us which security issues we should mitigate first.
Managing Environment Authorization

A large part of controlling the security of your environment is managing access to resources. Limiting access for unknown entities and providing authorization to known entities can be quite a challenge in a very complicated environment. We already learned that IAM provides IAM identities with authorization to AWS resources. Identities can be users, groups, or roles. Roles can then be used to provide other AWS resources access to an AWS resource, such as providing a lambda function access to an RDS database. With IAM, you can provide access directly to AWS users or even federated users. Federated users are users that sign in through an external identity provider. For example, imagine you host an ecommerce website and you want to make the purchasing process as easy as possible for your users. You could then use a common identity provider, such as Facebook or Google, to create federated users that will be allowed access to your application within AWS. Next, we are going to learn how we can create granular permissions in IAM to increase the overall security of our environment. Granular permissions are created through the use of custom IAM policies. These policies can be created in a variety of ways, such as using the console policy generator or by writing the JSON policy document directly. When maintaining our cloud environment, it is our responsibility to use the principle of least privilege to ensure our applications are secure and meet compliance standards. By creating granular permissions for our IAM identities, we drastically reduce the possibility of a cyberattack on our infrastructure.

This is an example of an IAM policy that provides full DynamoDB access. An IAM policy can consist of a single statement or multiple statements. A statement includes all of the information regarding a single permission for an IAM identity. Inside the statement, the Effect section will consist of either an allow or an explicit deny. AWS identities, by default, are not provided access to any AWS resources. However, in certain cases, it may be useful to use an explicit deny for a resource. Because our policy provides full DynamoDB access, the Effect section for this policy is set to Allow. The Action section is used to describe the action that is allowed or denied by the effect. The general syntax for the action is the name of the AWS service followed by a colon and then a specific command to allow or deny access to. An asterisk character is used to denote full access to all actions. So for this example, our action is allowing full access to all DynamoDB action commands with the asterisk. The Resource attribute specifies the AWS resource that is associated with the action. Again, for our resource, we are using the same asterisk character to specify all resources. This policy will work. However, it violates the principle of least privilege because it allows full read and write access to all DynamoDB tables. It would be more secure to specify specific actions and resources for this policy so that we don't provide more access to our IAM identities than needed.

Let's take a look at another policy that provides access to only a few DynamoDB read functions for only a specific DynamoDB table. Our IAM policy still has the same main components, such as a statement, effect, action, and resource. However, we can see that our Action and Resource elements vary from the previous example. 
We updated the Action section to an array and clearly indicated two different DynamoDB functions to allow access to. The DynamoDB Get and Query functions are included in the policy with the typical action syntax: the AWS service, followed by a colon, and then the API name we are providing access to. Our Resource section now contains an Amazon Resource Name, typically referred to as an ARN, for a specific DynamoDB table called MyTable. ARNs are commonly used in AWS services, such as IAM, to identify specific AWS entities. A security best practice is to always use ARNs in the Resource section of IAM policies rather than providing full access to any resource. If you have a production database in the same AWS account as your lower environments, you can use this approach to create an IAM policy that prevents your development resources and IAM users from accessing your production database tables. Obviously, it's much faster to use Amazon's managed policies and provide more access to your resources than they actually need. But it's important to remember that, for security, especially in production environments, providing full access to a resource within IAM policies is always a potential vulnerability.
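The two policy documents described above are not reproduced in the transcript, but they would look roughly like the following sketch, shown here as Python dictionaries for use with boto3; the account ID and region in the ARN are placeholders.

    import json
    import boto3

    # Broad policy described first: full access to every DynamoDB action and table
    # (shown for comparison only; it is not attached to anything here).
    full_access_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "*",
        }],
    }

    # Scoped-down policy: only GetItem and Query, and only on the MyTable table.
    read_only_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
        }],
    }

    # Register the least-privilege version as a customer managed policy.
    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="MyTableReadOnly",
        PolicyDocument=json.dumps(read_only_policy),
    )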
Logging Fundamentals in CloudWatch

We've already explored CloudWatch as a service and know that it is used to store logs from our applications and AWS resources. Log events in CloudWatch record application activity and can be accessed directly in the AWS dashboard. When maintaining the security of a cloud environment, it's important that you have sufficient logging set up to monitor your application status and troubleshoot potential issues if they arise. In the AWS dashboard, you can search CloudWatch logs with an easy-to-use interface. This allows users to easily find certain logs or events that have occurred in their CloudWatch log groups. When maintaining your infrastructure in AWS, setting up automated alarms with CloudWatch can take some of the burden of monitoring a complex environment away from you. Alarms can be set to immediately notify your IAM users with emails or text messages for important events that may occur. You can configure a CloudWatch alarm to send notifications to you immediately when any unexpected application errors occur. This will allow you to address any errors as soon as they occur rather than having to wait until they are reported by your application users. CloudWatch alarms can also be used to configure autoscaling of your application. For example, you can create an alarm for when an EC2 instance running an application exceeds a certain CPU capacity. This alarm could trigger another EC2 instance to be created to handle the increased traffic on your application. With this setup, your application will be able to handle gradual traffic spikes without any user intervention. A common industry term for this is elastic scaling. Alarms can be configured for situations when a user-defined metric exceeds a threshold during a specified time period. Let's imagine we are creating an application that monitors the stock market for day trading. We could create a CloudWatch alarm that notifies us immediately once a threshold is hit for a certain stock price, indicating that it is a good time to buy or sell that stock. Alarms based on metric thresholds are very useful, and it's up to us to create good metrics for our alarms that are custom to our application and our cloud environment.
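As an illustration of the CPU-based alarm just described, a minimal boto3 (Python) sketch might look like the following; the instance ID, thresholds, and notification topic are placeholders rather than values from the course.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when average CPU on one instance stays above 80% for two 5-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="HighCpuOnAppInstance",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        # Placeholder SNS topic; an Auto Scaling policy ARN could be used instead
        # to add capacity automatically (elastic scaling).
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )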
Searching through Service Logs

In this demo, I'm going to show you how we can monitor our applications and AWS services using CloudWatch. We will view application logs and use the search functionality to query for specific messages within our log events. Then we will create an automated alarm that will notify us of application failures. For this tutorial, let's pretend the service we used earlier in this course for finding a user's account balance is failing. As a security expert, how could we track down and find the appropriate error logs for a failing service? If you remember correctly, our Get Account Balance service was running in a lambda function. Lambda functions are serverless, and therefore the physical machine they actually run on is terminated after execution. So we need a way of finding logs for our application after the server has terminated. Let's get started.

To find application and service logs, we will head to the CloudWatch service in the AWS dashboard. On the CloudWatch service overview page, we can view alarms and metric overviews for our AWS account. Because we are looking for logs specific to AWS Lambda, let's select the Logs option from the sidebar to view our CloudWatch logs for each AWS service. On the Log Groups page, you can view a list of all log groups you have created. You can see that the standard syntax for log group names contains the service name between the slashes. Let's click on the Get Account Balance log group for AWS Lambda to view the logs we have aggregated for that lambda function. You can see that we have a lot of log streams for this service. We mentioned earlier that Lambda uses serverless computing. So when the function is not executed for a specified time period, the server running the function will terminate. Therefore, it's common for AWS Lambda to create new log streams frequently, because every time it's invoked, it could be running on completely different hardware. When trying to manually track down an error, traversing through each log stream would be quite a tedious task. Luckily, AWS has a powerful search capability integrated with CloudWatch that we can leverage. Let's click Search Log Group so that we can search all of our log streams for this lambda function. On the Filter events page, we can search for specific text and error messages within our log streams for a specified time period. Since we are searching for application errors, let's just search for the text error and see what it comes up with. When you want to search for specific text, enclose the text you want to search for inside double quotes to indicate that you want to match a specific string. After we type error in double quotes and press Enter, you can see that Amazon automatically searches for that text within our aggregated log streams. Depending on the amount of data you're trying to parse through and the date range you specify, this can take a while. Once it's done searching, we can see that we have multiple matches. If we expand one of our matches, we can view the specific log entry that includes the text error. To get more information on a specific error thrown, we could select to show the error log in the log stream to view all of the CloudWatch logs immediately before and after the error was thrown. Logging is an important aspect of security. It allows you to easily track down and mitigate issues within the cloud, and it can provide a full audit trail of everything that occurs within your applications. 
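The same search can also be run programmatically. Here is a minimal boto3 (Python) sketch using the CloudWatch Logs filter API; the log group name is my guess at what the demo's function would produce and should be adjusted to your own.

    import boto3

    logs = boto3.client("logs")

    # Search every log stream in the function's log group for the literal text "error".
    response = logs.filter_log_events(
        logGroupName="/aws/lambda/getAccountBalance",  # assumed name based on the demo
        filterPattern='"error"',                        # quotes make it a literal term match
    )

    for event in response["events"]:
        print(event["logStreamName"], event["message"])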
Searching through CloudWatch logs is an easy task. However, we wouldn't necessarily want to do this every day, especially if we're only doing it to determine whether our application is functioning properly. Setting up alarms for important events and failures matters because it increases the security of your infrastructure: if and when an error does occur, it's always best to be notified immediately to prevent application downtime. Luckily, AWS has automated alarms that integrate nicely with CloudWatch to accomplish this. With automated alarms, you can set up immediate notifications via text or email for any application events you want to track.

To create an alarm for our Lambda function, let's select Alarms from the left-hand sidebar. We'll select Create Alarm to create a new alarm, and on the Metric screen we'll choose Lambda as our primary metric. We want to know about application errors, so let's search for our function name and select the Errors metric. From here, we can give our alarm a name and a description that explains what it is trying to accomplish. We'll name our alarm Account Balance Error and add a description that accurately describes what the alarm is used for. We'll set the alarm to trigger whenever the number of errors is greater than or equal to 1 for any data point, and we'll treat missing data as bad because a single error is definitely not a good thing. Lastly, we'll add a notification list of users to notify whenever this alarm is breached. You can see that my email address is set here. Now that this alarm looks good, let's go ahead and create it. When adding an email address to an alarm for the first time, you will have to opt in to alarm notifications with your email provider: you'll be prompted to click a link in an email from Amazon to subscribe to the alarm messages.

Since I've already subscribed, let's manually trigger an error for this Lambda function to ensure that the alarm is configured properly. To do this, let's head to the Lambda function in the AWS console. I've already created a test that will throw an error by providing the Lambda function bad data, so let's press the Test button to execute it and manually throw an error. Great, we can see that the Lambda function threw an error, so now let's confirm that we received an alarm in our email. Sure enough, I have an email from Amazon notifying me immediately that the Lambda function alarm was triggered and that an error occurred. We now know how logs are aggregated into CloudWatch for each AWS service, how we can search through multiple log streams to track down specific events of interest within our environment, and how we can set up automated alarms to notify us of important events within our AWS account.
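For reference, an alarm like the one created in this demo could also be scripted. The following Python boto3 sketch is a rough, assumption-laden equivalent: the SNS topic name, email address, and Lambda function name are placeholders, not values taken from the course.

import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# Create a notification topic and subscribe an email address. The recipient
# must confirm the subscription via the link Amazon emails them, as noted above.
topic_arn = sns.create_topic(Name="account-balance-errors")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="you@example.com")

# Alarm whenever the Lambda Errors metric reports one or more errors.
cloudwatch.put_metric_alarm(
    AlarmName="Account Balance Error",
    AlarmDescription="Notify when the Get Account Balance Lambda reports any error",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "GetAccountBalance"}],  # assumed function name
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="breaching",  # treat missing data as bad
    AlarmActions=[topic_arn],
)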
Detecting Environment Vulnerabilities

Amazon Inspector is a vital service for maintaining the security of an AWS environment. The service provides inspections and vulnerability scans tailored to your cloud. The security scans it runs determine whether your application follows security best practices and make recommendations for updating your environment to increase its security. By default, Inspector will scan your AWS services, but you can also install the Amazon Inspector agent on your EC2 instances directly to test the network accessibility and security state of the applications on those instances. Inspector provides plenty of benefits when managing and maintaining the security of the cloud. It prioritizes security vulnerabilities for you, so you know exactly which actions to take to secure your environment. It limits the possibility of a cyberattack on your application, and it allows your infrastructure to scale to an enterprise level while reducing the risk of attack.

We've learned a lot over the last three modules. You've learned various AWS services and how they relate to the overall security of your cloud infrastructure. We've learned how to create AWS users and how to authorize IAM identities using IAM. We've covered the shared responsibility model and how it divides responsibilities between AWS and the customer, and you now know the customer and AWS responsibilities for all three service types. Infrastructure services, such as EC2, provide compute power; as customers, we are responsible for the operating system and the code hosted on the instance, while AWS is responsible for securing the physical compute hardware. Container services, such as the Relational Database Service, are services where AWS is responsible for managing the operating system, and as users we are responsible for securing our usage of that service. For abstracted services, such as the Simple Storage Service, we are responsible for securing the endpoints we use to store and retrieve data. We learned how to maintain and monitor the security of our AWS account, using CloudWatch as an aggregated logging and monitoring solution. We learned how to create granular access to resources by using custom IAM policies for IAM identities. And we covered how to run assessments for exposure, vulnerabilities, and deviations from best practices using Amazon Inspector. Thank you for viewing this course. I hope you learned how to better secure your cloud environments and that you found this course useful for your endeavors. If you have any questions, you're more than welcome to reach out to me directly. Cheers.


AWS Cloud Security Best Practices

1.     Least Privilege (a minimal policy sketch follows after this list)
2.     Handle Keys with Care
3.     Encrypt "All the things"
4.     Continuous Monitoring (log and monitor authorized and unauthorized activity)
a.     CloudTrail (monitors user actions in AWS)
b.     VPC Flow Logs (logs network activity)
c.     S3 Access Logging
5.     Audit Regularly
6.     Hardening Tasks
a.     Disable apps that authenticate insecurely
b.     Disable non-essential network services on startup
c.     Ensure that installed software doesn't use default internal accounts and passwords
d.     Ensure the Acceptable Use Policy is not violated
7.     Classify Assets and Separate Components with different security levels
8.     Separate administrative level access. Implement role separation and access controls to limit access
9.     Verify that time-synchronization technology is implemented and kept current
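To make practice 1 concrete, here is a minimal Python boto3 sketch of a least-privilege policy: it grants read-only access to a single DynamoDB table and nothing else. The policy name and table ARN are hypothetical examples, not resources from this post.

import json
import boto3

iam = boto3.client("iam")

# Allow only two read actions on one specific table; everything else is
# implicitly denied, in line with the principle of least privilege.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/AccountBalances",  # hypothetical table
        }
    ],
}

iam.create_policy(
    PolicyName="AccountBalancesReadOnly",  # hypothetical policy name
    PolicyDocument=json.dumps(least_privilege_policy),
)

The resulting policy can then be attached to an IAM user, group, or role that genuinely needs that read access.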
