diff --git a/content/AWS/01_intro_to_cloud_console.ipynb b/content/AWS/01_intro_to_cloud_console.ipynb index 5b51c1a..ff5b77d 100644 --- a/content/AWS/01_intro_to_cloud_console.ipynb +++ b/content/AWS/01_intro_to_cloud_console.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "dc57021c", + "id": "64325d0f", "metadata": {}, "source": [ "# Introduction to the AWS Console\n", @@ -28,13 +28,13 @@ }, { "cell_type": "markdown", - "id": "502f2360", + "id": "54c59223", "metadata": {}, "source": [] }, { "cell_type": "markdown", - "id": "5a252bd1-ac08-455f-9f5a-1049676173af", + "id": "2477a2d7", "metadata": {}, "source": [ "## Setup\n", @@ -49,7 +49,7 @@ }, { "cell_type": "markdown", - "id": "9108a06c-ac48-4c70-a5cd-03f74ff1f67c", + "id": "c72b28be", "metadata": {}, "source": [ "## Logging in to the console\n", @@ -64,7 +64,7 @@ }, { "cell_type": "markdown", - "id": "930220c5", + "id": "06ba8d29", "metadata": {}, "source": [ "## Key concepts and components of the AWS console\n", @@ -74,7 +74,7 @@ }, { "cell_type": "markdown", - "id": "22bf4b20", + "id": "b9a21e7b", "metadata": {}, "source": [ "Figure 2 lists the basic components you will see when you first log in to the AWS console. 
\n", @@ -97,7 +97,7 @@ }, { "cell_type": "markdown", - "id": "b2240a49", + "id": "14396589", "metadata": {}, "source": [ "```{admonition} Exercise\n", @@ -112,7 +112,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -126,7 +126,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.5" + "version": "3.10.2" } }, "nbformat": 4, diff --git a/content/AWS/02_intro_to_compute.ipynb b/content/AWS/02_intro_to_compute.ipynb index 363f553..0e43bcb 100644 --- a/content/AWS/02_intro_to_compute.ipynb +++ b/content/AWS/02_intro_to_compute.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "dc57021c", + "id": "99642f5e", "metadata": {}, "source": [ "# Introduction to Elastic Cloud Compute (EC2) - Part 1\n", @@ -30,7 +30,7 @@ }, { "cell_type": "markdown", - "id": "502f2360", + "id": "fbcbf455", "metadata": {}, "source": [ "Recall that the two fundamental components of cloud computing is compute and storage. On AWS, a \"virtual server\" or \"virtual computer\" is known as an **Elastic Cloud Compute (EC2) instance**; sometimes it's called \"EC2\", sometimes it's called an \"instance\" to denote that the ability to build and terminate this server instantaneously, but they all mean the same thing. An EC2 instance is no different from a server that sits under your desk, or your local departmental cluster, or even your local HPC cluster. You even boot up an EC2 instance through the web console, install software and then shut down your instance just like you would a real computer, except that Amazon takes care of the physical machinery while you are in charge of process of creating the computer. In some sense, you can think of utilizing an EC2 instance as renting a server or computer from Amazon! 
\n", @@ -43,10 +43,10 @@ }, { "cell_type": "markdown", - "id": "bc5d082d", + "id": "17329430", "metadata": {}, "source": [ - "We begin with the AWS console again. Under the \"Build a Solution\" panel, select **Launch a Virtual Machine**\n", + "We begin with the AWS console again. Under the \"Build a Solution\" panel, select `Launch a Virtual Machine`\n", "\n", "![Start page for the AWS console](images/console_ec2.png)\n", "\n", @@ -70,64 +70,61 @@ }, { "cell_type": "markdown", - "id": "b9809503", + "id": "efcd0d76", "metadata": {}, "source": [ - "## Select an AMI (Step 1)\n", + "## 1. Select an AMI\n", "\n", "An Amazon Machine Image (AMI) is a template that Amazon uses to describe the operating system, disk type and all the software configuration that is needed to make sure a computer runs smoothly. Imagine that you are purchasing a new laptop; fresh out of the box, the laptop is pre-configured with an operating system (e.g. Windows, Mac OS, Ubuntu etc.), configuration files that tells the laptop what peripherals are attached, and pre-installed software like Adobe PDF reader. An AMI contains all this information so that your EC2 instance runs exactly like it would a new laptop out of the box! There is much more to learn about AMIs and how they can used for collaboration and data sharing but that is not within the scope of CLASS Essentials. \n", "\n", "As you scroll through the AMI list (Figure 2) you will notice that the list contains offerings from various vendors (e.g. Amazon, RedHat, Windows, etc.). We will be choosing the Ubuntu operating system for flexibility and versatility (can be used outside of the AWS ecosystem). 
\n", "\n", - "To list all the Free Tier AMIs, check the box on the right that says **Free tier only**.\n", + "To list all the Free Tier AMIs, check the box on the right that says ```Free tier only```.\n", "\n", - "![ec2-ami](images/ec2-ami.png)\n", - "Step 1 - Select an AMI - Free Tier Only\n", + "\n", "\n", - "Scroll to `Ubuntu Server 20.04 LTS(HVM), SSD Volume Type` Select `64-bit(x86)`. \n", + "
Figure 2: Step 1 - Select an AMI - Free Tier Only

\n", "\n", - "![ec2-ubuntu](images/ec2-ubuntu.png)\n", - "Step 1 - Select an AMI - Operating System Selection" + "Scroll to ```Ubuntu Server 20.04 LTS(HVM), SSD Volume Type``` (Figure 3). Select ```64-bit(x86)```. \n", + "\n", + "\n", + "\n", + "
Figure 3: Step 1 - Select an AMI - Operating System Selection

" ] }, { "cell_type": "markdown", - "id": "9c658eed-3cf6-4f46-bde6-f3b481874013", - "metadata": { - "tags": [] - }, + "id": "20372414", + "metadata": {}, "source": [ - "## Choose an Instance Type (Step 2)\n", + "## Step 2: Choose an Instance Type\n", "\n", "Choosing an instance type is choosing the hardware for your computing system: you get to pick the number of CPUs and memory size for your instance. \n", "\n", "Instance types are group by [**families**](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) and denotes whether, for example, an instance is optimized for batch processing (compute-optimized, C-family), optimized for databases (memory-optimized, R-family) or has accelerated hardware (GPUs) for AI or Machine Learning pipelines. \n", "\n", - "When you choose an Instance Type (below), the screen show additional information about the selected instance type including the number of CPUs, the memory size, the type of storage and information about networking. \n", - "![ec2-ubuntu](images/ec2-ubuntu.png)\n", + "When you choose an Instance Type (Figure 3), the screen show additional information about the selected instance type including the number of CPUs, the memory size, the type of storage and information about networking. \n", "\n", "In the Instance Storage (GB) column, you will notice a term called **EBS**. EBS is the acronym for **Elastic Block Storage** and is analogous to the hard disk or boot drive on your personal computer or laptop. More details about EBS and different kinds of disk storage on EC2 instances are beyond the scope of CLASS Essentials. \n", "\n", "```{admonition} Note\n", ":class: note\n", - "The four most common types of storage you will encounter on AWS are: Elastic Block Storage (EBS), Elastic File Storage (EFS), Simple Storage Service (s3) and s3 Glacier. In the simplest terms, EBS is analogous to a computer hard drive and EFS is analogous to a network file system (NFS) or shared file system. 
s3 is AWS's object storage which is discussed [here](03_intro_to_cloud_storage). s3 Glacier is a cost-effective way of storing s3 files that you do not need to access frequently. \n", + "The four most common types of storage you will encounter on AWS are: Elastic Block Storage (EBS), Elastic File Storage (EFS), Simple Storage Service (s3) and s3 Glacier. In the simplest terms, EBS is analogous to a computer hard drive and EFS is analogous to a network file system (NFS) or shared file system. s3 is AWS's object storage which is discussed [here](05_intro_to_cloud_storage). s3 Glacier is a cost-effective way of storing s3 files that you do not need to access frequently. \n", "```\n", "\n", - "Here will will select a `t2.micro` instance which is Free Tier Eligible but only has 1vCPU and 1 GiB of memory. The cost of running a **t2.micro** instance is, at the time of publication, as follows: \n", - "\n", - "![Choose an Instance Type](images/ec2-type.png)\n", - "\n", - "Select **Next: Configure Instance Details**." + "Here we will select a ```t2.micro``` instance, which is Free Tier eligible but only has 1 vCPU and 1 GiB of memory. The cost of running a t2.micro instance is " ] }, { "cell_type": "markdown", - "id": "285aad80-5670-4bdf-b9c3-86439720e9e1", - "metadata": { - "tags": [] - }, + "id": "ca6a7991", + "metadata": {}, "source": [ - "## Configure Instance Details (Step 3)\n", + "![Choose an Instance Type](images/ec2-type.png)\n", + "\n", + "Select ```Next: Configure Instance Details```.\n", + "\n", + "## Step 3: Configure Instance Details\n", "Step 3 in creating an EC2 instance involves a rudimentary understanding of several key pieces of AWS and cloud jargon (Figure 4). While delving deeper into some of the terminology is outside of the scope of CLASS Essentials, we will go through these terms in brief as we learn how to configure your EC2 instance. For the most part, we will **leave the settings as default**. 
CLASS Intermediate offers a more in-depth discussion on cloud concepts. \n", "\n", "![Configure Instance Details](images/ec2-configure.png)\n", @@ -136,7 +133,7 @@ "\n", "```{admonition} Note\n", ":class: note\n", - "Recall that we learned about regions in the [previous chapter](01_intro_to_cloud_console). \n", + "Recall that we learned about regions in the [previous chapter](./01_intro_to_cloud_console). \n", "```\n", "\n", "**Purchasing Options**: Throughout your AWS journey, you will hear the term **Spot Instances**. Spot Instances make use of the servers that go unused in AWS data centers to minimize costs. Recall that AWS has many data centers spread across the globe and not all their servers are utilized at 100% capacity at all times. Amazon uses Spot Instances as a flexible way to profit from extra capacity. Users have access to Spot Instances through a bidding process; sometimes users can save up to 90% off the on-demand price this way! We will not expand much more on Spot Instances in CLASS Essentials but if you are interested, I2's CLASS Intermediate talks more " ] }, { "cell_type": "markdown", - "id": "9c874bad", + "id": "97271ddd", "metadata": {}, "source": [ "```{admonition} Exercise\n", ":class: attention\n", "\n", "* What kind of information is contained in an AMI? \n", "* How do Spot Instances help you optimize costs?\n", "```" ] }, { "cell_type": "markdown", - "id": "b36239c2-a76b-46e7-a495-d6da6e4598a4", + "id": "32d19e24", "metadata": {}, - "source": [ - "## Review Progress\n", - "\n", - "In the previous steps we learned how to launch a virtual machine from the AWS console. We selected an Amazon Machine Image (AMI), Chose an Instance and Configured Launch Settings. Recall that there are 7 steps to walk through to create a new EC2 instance; we will go through each in detail: \n", - "\n", - "1. Select an AMI\n", - "2. Choose Instance Type\n", - "3. Configure Instance \n", - "4. Add Storage\n", - "5. Add Tags\n", - "6. Configure Security Group\n", - "7. 
Review/Launch" - ] + "source": [] }, { "cell_type": "markdown", - "id": "07b8ec13-c056-4236-ac14-12926bd7f872", + "id": "7820264f", "metadata": {}, "source": [ - "## Add Storage (Step 4)\n", + "## Step 4. Add Storage\n", "\n", "Storage on an EC2 instance is akin to a hard drive. Here we will leave the default settings but it is important to know that a hard drive on an EC2 instance is known as [Elastic Block Storage](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html). EBS volumes behave like raw, unformatted block devices.\n", "\n", - "![ec2-storage](images/ec2-storage.png)" + "" ] }, { "cell_type": "markdown", - "id": "a29c2a86-c864-493c-9e3f-0b4cced47dd9", + "id": "309bdc6d", "metadata": {}, "source": [ - "## Add Tags (Step 5)\n", + "## Step 5: Add Tags\n", "\n", "[Tags](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) are labels that you assign to an AWS resource. Each tag consists of a key and an optional value, both of which you define.Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. e\n", "\n", - "![ec2-tags](images/ec2-tags.png)" + "\n" ] }, { "cell_type": "markdown", - "id": "45d695b3-c39b-4889-af48-a0bfb901cb32", + "id": "2835744d", "metadata": {}, "source": [ - "## Step 6: Configure Security Group (Step 6)\n", + "## Step 6: Configure Security Group\n", "\n", "A [security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) acts as a virtual firewall for your EC2 instances to control inbound and outbound traffic. Again, we will not delve too much into networking protocols in CLASS Essentials. We will leave the default values of opening port 22 so that we can securely log in to the EC2 instance that we create. 
\n", "\n", - "![ec2-sg](images/ec2-sg.png)" + "" ] }, { "cell_type": "markdown", - "id": "5334a614-fb7f-409f-8aba-d1f18c92bc60", + "id": "56318ce3", "metadata": {}, "source": [ "## Step 7: Review Instance Launch\n", "\n", - "Next review the details and click on **Launch**\n", - "![ec2-launch](images/ec2-launch.png)\n", + "\n", "\n", - "You will be prompted to generate a ssh-key to access the virtual machine. For now we will create a new one by entering `essentials-aws` in the key name and clicking **Download**\n", - "![ec2-sshkey](images/ec2-sshkey.png)\n", + "\n", "\n", - "You shoudl now see the following message\n", - "![ec2-confirm](images/ec2-confirm.png)\n", - "\n", - "Congratulations, you have created a virtual machine.\n", - "\n", - "```{admonition} Caution\n", - ":class: caution\n", - "\n", - "We will delete this virtual machine later. If you do not, **you will be charged for the running machine**.\n", - "```" + "" ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0a851856", + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -251,7 +233,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.5" + "version": "3.10.2" } }, "nbformat": 4, diff --git a/content/AWS/02_intro_to_compute_part1.ipynb b/content/AWS/02_intro_to_compute_part1.ipynb deleted file mode 100644 index 07c12d4..0000000 --- a/content/AWS/02_intro_to_compute_part1.ipynb +++ /dev/null @@ -1,241 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "dc57021c", - "metadata": {}, - "source": [ - "# Introduction to Elastic Cloud Compute (EC2) - Part 1\n", - "\n", - "\n", - "```{admonition} Overview\n", - ":class: tip\n", - "\n", - "**Teaching:** 45 mins\n", - "\n", - "**Exercises:** 10 mins\n", - "\n", - "**Questions:**\n", - "* What is an EC2 instance?\n", - "* When would I 
use an EC2 instance?\n", - "* How do I launch an EC2 instance?\n", - "\n", - "**Objectives:**\n", - "* Understand the concept of virtual servers.\n", - "* Understand what an Elastic Cloud Compute (EC2) instance is.\n", - "* Understand how to launch an EC2 instance. \n", - "\n", - "```" - ] - }, - { - "cell_type": "markdown", - "id": "502f2360", - "metadata": {}, - "source": [ - "Recall that the two fundamental components of cloud computing is compute and storage. On AWS, a \"virtual server\" or \"virtual computer\" is known as an **Elastic Cloud Compute (EC2) instance**; sometimes it's called \"EC2\", sometimes it's called an \"instance\" to denote that the ability to build and terminate this server instantaneously, but they all mean the same thing. An EC2 instance is no different from a server that sits under your desk, or your local departmental cluster, or even your local HPC cluster. You even boot up an EC2 instance through the web console, install software and then shut down your instance just like you would a real computer, except that Amazon takes care of the physical machinery while you are in charge of process of creating the computer. In some sense, you can think of utilizing an EC2 instance as renting a server or computer from Amazon! \n", - "\n", - "In cloud jargon, the term **elasticity** denotes the ability to quickly expand or decrease computer processing, memory, and storage resources to meet changing demands. In that way, you can expand the size of your CPU, RAM and disk size on your EC2 instance almost instantenously. Since EC2 forms the backbone of most of AWS's core infrastructure, it is an important part of your cloud journey. \n", - " \n", - "\n", - "Let's walk through some of the steps on getting an EC2 instance up and running. \n" - ] - }, - { - "cell_type": "markdown", - "id": "bc5d082d", - "metadata": {}, - "source": [ - "We begin with the AWS console again. 
Under the \"Build a Solution\" panel, select `Launch a Virtual Machine`\n", - "\n", - "![Start page for the AWS console](images/console_ec2.png)\n", - "\n", - "This will then lead you through a series of steps to get a **Free Tier** EC2 instance up and running. \n", - "\n", - "```{admonition} Note\n", - ":class: note\n", - "\n", - "AWS Free Tier refers to several of the services that AWS offers to help users gain more hands on experience on the AWS platform without being charged. [Click here](https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&all-free-tier.sort-order=asc&awsf.Free%20Tier%20Types=*all&awsf.Free%20Tier%20Categories=*all) for more info about the AWS Free Tier [external link] . \n", - "```\n", - "\n", - "There are 7 steps to walk through to create a new EC2 instance; we will go through each in detail: \n", - "1. Select an AMI\n", - "2. Choose Instance Type\n", - "3. Configure Instance \n", - "4. Add Storage\n", - "5. Add Tags\n", - "6. Configure Security Group\n", - "7. Review/Launch" - ] - }, - { - "cell_type": "markdown", - "id": "b9809503", - "metadata": {}, - "source": [ - "## 1. Select an AMI\n", - "\n", - "An Amazon Machine Image (AMI) is a template that Amazon uses to describe the operating system, disk type and all the software configuration that is needed to make sure a computer runs smoothly. Imagine that you are purchasing a new laptop; fresh out of the box, the laptop is pre-configured with an operating system (e.g. Windows, Mac OS, Ubuntu etc.), configuration files that tells the laptop what peripherals are attached, and pre-installed software like Adobe PDF reader. An AMI contains all this information so that your EC2 instance runs exactly like it would a new laptop out of the box! There is much more to learn about AMIs and how they can used for collaboration and data sharing but that is not within the scope of CLASS Essentials. 
\n", - "\n", - "As you scroll through the AMI list (Figure 2) you will notice that the list contains offerings from various vendors (e.g. Amazon, RedHat, Windows, etc.). We will be choosing the Ubuntu operating system for flexibility and versatility (can be used outside of the AWS ecosystem). \n", - "\n", - "To list all the Free Tier AMIs, check the box on the right that says ```Free tier only```.\n", - "\n", - "\n", - "\n", - "
Figure 2: Step 1 - Select an AMI - Free Tier Only

\n", - "\n", - "Scroll to ```Ubuntu Server 20.04 LTS(HVM), SSD Volume Type``` (Figure 3). Select ```64-bit(x86)```. \n", - "\n", - "\n", - "\n", - "
Figure 3: Step 1 - Select an AMI - Operating System Selection

" - ] - }, - { - "cell_type": "markdown", - "id": "17597535", - "metadata": {}, - "source": [ - "## Step 2: Choose an Instance Type\n", - "\n", - "Choosing an instance type is choosing the hardware for your computing system: you get to pick the number of CPUs and memory size for your instance. \n", - "\n", - "Instance types are group by [**families**](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) and denotes whether, for example, an instance is optimized for batch processing (compute-optimized, C-family), optimized for databases (memory-optimized, R-family) or has accelerated hardware (GPUs) for AI or Machine Learning pipelines. \n", - "\n", - "When you choose an Instance Type (Figure 3), the screen show additional information about the selected instance type including the number of CPUs, the memory size, the type of storage and information about networking. \n", - "\n", - "In the Instance Storage (GB) column, you will notice a term called **EBS**. EBS is the acronym for **Elastic Block Storage** and is analogous to the hard disk or boot drive on your personal computer or laptop. More details about EBS and different kinds of disk storage on EC2 instances are beyond the scope of CLASS Essentials. \n", - "\n", - "```{admonition} Note\n", - ":class: note\n", - "The four most common types of storage you will encounter on AWS are: Elastic Block Storage (EBS), Elastic File Storage (EFS), Simple Storage Service (s3) and s3 Glacier. In the simplest terms, EBS is analogous to a computer hard drive and EFS is analogous to a network file system (NFS) or shared file system. s3 is AWS's object storage which is discussed [here](05_intro_to_cloud_storage). s3 Glacier is a cost-effective way of storing s3 files that you do not need to access frequently. \n", - "```\n", - "\n", - "Here will will select a ```t2.micro``` instance which is Free Tier Eligible but only has 1vCPU and 1 GiB of memory. 
The cost of running a t2.micro instance is " - ] - }, - { - "cell_type": "markdown", - "id": "4ee655a3", - "metadata": {}, - "source": [ - "![Choose an Instance Type](images/ec2-type.png)\n", - "\n", - "Select ```Next: Configure Instance Details```.\n", - "\n", - "## Step 3: Configure Instance Details\n", - "Step 3 in creating an EC2 instance involves a rudimentary understanding of several key AWS and cloud jargon (Figure 4). While delving deeper into some of the terminology is outside of the scope of CLASS Essentials, we go will through these terms in brief as we learn how to configure your EC2 instance. For the most part, we will **leave the settings as default**. CLASS Intermediate offers a more in depth discussion on cloud concepts. \n", - "\n", - "![Configure Instance Details](images/ec2-configure.png)\n", - "\n", - "**Number of instances** : This indicates how many instances you want to create at the same time. Here, we will leave the value as '1' but in actuality, you can can have up to 20 instances per region. \n", - "\n", - "```{admonition} Note\n", - ":class: note\n", - "Recall that we learned about regions in the [previous chapter](./01_intro_to_cloud_console). \n", - "```\n", - "\n", - "**Purchasing Options** : Throughout your AWS journey, you will hear the term **Spot Instances**. Spot instances make use of the servers that go unused in AWS data centers to minimize costs. Recall that AWS has many data centers spread across the globe and not all their servers are utilized at 100% capacity at all times. Amazon uses Spot Instances as a flexible way to profit from extra capacity. Users have access to Spot Instances through a bidding process, sometimes users can save up to 90% off the on-deman compute instance this way! 
We will not expand much more on Spot Instances in CLASS Essentials but if you are interested, I2's CLASS Intermediate talks more " - ] - }, - { - "cell_type": "markdown", - "id": "9c874bad", - "metadata": {}, - "source": [ - "```{admonition} Exercise\n", - ":class: attention\n", - "\n", - "* What kind of information is contained in an AMI? \n", - "* How do Spot Instances help you optimize costs?\n", - "````" - ] - }, - { - "cell_type": "markdown", - "id": "1b8642f4", - "metadata": {}, - "source": [] - }, - { - "cell_type": "markdown", - "id": "ce945b0d", - "metadata": {}, - "source": [ - "## Step 4. Add Storage\n", - "\n", - "Storage on an EC2 instance is akin to a hard drive. Here we will leave the default settings but it is important to know that a hard drive on an EC2 instance is known as [Elastic Block Storage](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html). EBS volumes behave like raw, unformatted block devices.\n", - "\n", - "" - ] - }, - { - "cell_type": "markdown", - "id": "55f4f9af", - "metadata": {}, - "source": [ - "## Step 5: Add Tags\n", - "\n", - "[Tags](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) are labels that you assign to an AWS resource. Each tag consists of a key and an optional value, both of which you define.Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. e\n", - "\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "id": "bd0b7892", - "metadata": {}, - "source": [ - "## Step 6: Configure Security Group\n", - "\n", - "A [security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) acts as a virtual firewall for your EC2 instances to control inbound and outbound traffic. Again, we will not delve too much into networking protocols in CLASS Essentials. We will leave the default values of opening port 22 so that we can securely log in to the EC2 instance that we create. 
\n", - "\n", - "" - ] - }, - { - "cell_type": "markdown", - "id": "d4b441c6", - "metadata": {}, - "source": [ - "## Step 7: Review Instance Launch\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "1e918e55", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.9.5" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/content/AWS/03_intro_to_cloud_storage.ipynb b/content/AWS/03_intro_to_cloud_storage.ipynb index cbb04cf..6bd8643 100644 --- a/content/AWS/03_intro_to_cloud_storage.ipynb +++ b/content/AWS/03_intro_to_cloud_storage.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "dc57021c", + "id": "458bf7cc", "metadata": {}, "source": [ "# Introduction to Cloud Storage\n", @@ -29,7 +29,7 @@ }, { "cell_type": "markdown", - "id": "338a2a01", + "id": "9a0023d6", "metadata": {}, "source": [ "There are three types of [cloud data storage](https://aws.amazon.com/what-is-cloud-storage/): object storage, file storage, and block storage. In this module, we will focus on object storage (e.g. Amazon Simple Storage Service (S3)). Object storage is a technology that manages data as objects. All data is stored in one large repository which may be distributed across multiple physical storage devices, instead of being divided into files or folders.\n", @@ -44,7 +44,7 @@ }, { "cell_type": "markdown", - "id": "177860a7", + "id": "e347eb69", "metadata": {}, "source": [ "Here we will click into the s3 service page. Note that the region here is Global. s3 namespaces(the name of the buckets) are global. 
This means that no two buckets can have identical names even if they reside in a different regions. \n", @@ -56,7 +56,7 @@ }, { "cell_type": "markdown", - "id": "9e0f64bf", + "id": "3228bf08", "metadata": {}, "source": [ "This will bring you to the Create Bucket page. Here we will choose a name for our new bucket - it will need to be a unique global namespace. Here I will use my identifying IAM (user1783892) to create a bucket. We will name my bucket ```bucket-user1783892```, leave the region as us-east-1 as well as all the default settings and click ```Create Bucket```\n", @@ -66,7 +66,7 @@ }, { "cell_type": "markdown", - "id": "8c3eca82", + "id": "1504f939", "metadata": {}, "source": [ "When your bucket is successfully created, you will see it pop up in the s3 console. \n", @@ -76,7 +76,7 @@ }, { "cell_type": "markdown", - "id": "4a8c60bb", + "id": "99e53e32", "metadata": {}, "source": [ "In the next lesson, we will learn about the AWS CLI and how we can use that to manipulate both the EC2 and s3 bucket we have created. 
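Because bucket names share one global namespace, they must also follow strict naming rules. A simplified sketch of that check — roughly: 3 to 63 characters, lowercase letters, digits and hyphens, starting and ending with a letter or digit; the full rules in the S3 documentation cover additional cases such as dots and IP-address-like names:

```python
import re

# Simplified S3 bucket-name check: 3-63 chars, lowercase letters, digits and
# hyphens, starting and ending with a letter or digit. The full rules in the
# S3 docs also cover dots and other restrictions.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name):
    return bool(BUCKET_NAME_RE.match(name))

print(is_valid_bucket_name("bucket-user1783892"))  # follows the rules
print(is_valid_bucket_name("Bucket_User1783892"))  # uppercase/underscore rejected
```

Note that passing this check does not guarantee the name is available — someone else may already own it anywhere in the world.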
" @@ -85,7 +85,7 @@ { "cell_type": "code", "execution_count": null, - "id": "136d902c", + "id": "839aede6", "metadata": {}, "outputs": [], "source": [] @@ -93,7 +93,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -107,7 +107,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.5" + "version": "3.10.2" } }, "nbformat": 4, diff --git a/content/AWS/04_intro_to_cli.ipynb b/content/AWS/04_intro_to_cli.ipynb index 25635c1..8347c83 100644 --- a/content/AWS/04_intro_to_cli.ipynb +++ b/content/AWS/04_intro_to_cli.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "dc57021c", + "id": "b0cf1425", "metadata": {}, "source": [ "# Introduction to the AWS CLI\n", @@ -11,9 +11,9 @@ "```{admonition} Overview\n", ":class: tip\n", "\n", - "**Teaching: 45 mins**\n", + "**Teaching:** 30 mins\n", "\n", - "**Exercises: 10 mins**\n", + "**Exercises:** 10 mins\n", "\n", "**Questions:**\n", "* How do I use the AWS CLI?\n", @@ -26,7 +26,7 @@ }, { "cell_type": "markdown", - "id": "8e8dd80c", + "id": "a25aac4e", "metadata": {}, "source": [ "# The AWS CloudShell\n", @@ -52,7 +52,7 @@ }, { "cell_type": "markdown", - "id": "dd9503b3", + "id": "5ca7a3af", "metadata": {}, "source": [ "# AWS CLI and s3 buckets\n", @@ -126,45 +126,10 @@ "In the next episode, you will learn how to utilize the AWS CLI within an EC2 instance and use that to create a fun research workflow!" ] }, - { - "cell_type": "markdown", - "id": "9ed2bb8f", - "metadata": {}, - "source": [ - "# AWS CLI and EC2\n", - "\n", - "Now we will exit the CloudShell and explore a different way to access your virtual machines with the AWS CLI. \n", - "\n", - "Let's navigate back to the EC2 console. We can do this by navigating to the bento menu icon at the top of the navigation bar. 
\n", - "\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "id": "67225c9c", - "metadata": {}, - "source": [ - "Once you are on the EC2 console, select your instance by clicking on the checkbox. The `Connect` button is at the top of the screen. Click on the connect button. \n", - "\n", - "\n", - "\n", - "That should bring you to a page that looks something like this: \n", - "\n", - "\n", - "\n", - "When you click connect, you will be connected to your EC2 instance via a secure shell tunnel!\n", - "\n", - "\n", - "```{admonition} Note\n", - "SSH or Secure Shell is a network communication protocol that enables two computers to communicate\n", - "```\n" - ] - }, { "cell_type": "code", "execution_count": null, - "id": "415774fe", + "id": "47313c4f", "metadata": {}, "outputs": [], "source": [] @@ -172,7 +137,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -186,7 +151,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.5" + "version": "3.10.2" } }, "nbformat": 4, diff --git a/content/AWS/05_running_analysis.ipynb b/content/AWS/05_running_analysis.ipynb index dc901b2..9427a42 100644 --- a/content/AWS/05_running_analysis.ipynb +++ b/content/AWS/05_running_analysis.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "dc57021c", + "id": "7deb7a74", "metadata": {}, "source": [ "# Putting It All Together\n", @@ -11,20 +11,21 @@ "```{admonition} Overview\n", ":class: tip\n", "\n", - "**Teaching:**\n", + "**Teaching:** 45 mins\n", "\n", - "**Exercises:**\n", + "**Exercises:** 10 mins\n", "\n", - "**Questions:**\n", + "**Questions:** \n", "\n", - "**Objectives:**\n", "\n", + "**Objectives:**\n", + "Create a simple research workflow using AWS EC2, AWS s3 and the AWS CLI\n", "```" ] }, { "cell_type": "markdown", - "id": "67ef6891", + "id": "f9d82db5", "metadata": {}, "source": [ "# AWS CLI and EC2\n", @@ 
-47,7 +48,7 @@ "\n", "\n", "```{admonition} Note\n", - "SSH or Secure Shell is a network communication protocol that enables two computers to communicate\n", + "SSH or Secure Shell is a network communication protocol that enables two computers to communicate and share data.\n", "```\n", "\n", "Your virtual machine does not come pre-loaded with bells and whistles, so the first order business is making sure that it has the tools it needs for us to do some fun stuff. We will get the necessary updates and upgrades for the operating system. \n", @@ -58,14 +59,278 @@ "\n", "You will be prompted to install a whole slew of packages. Go ahead and press `Y` if prompted. This process may take a few minutes. \n", + "\n", + "Next, we will install the AWS CLI package. Again, you will be prompted for installation. Press `Y` when prompted. \n", + "\n", + "```bash\n", + "sudo apt install awscli \n", + "```\n", + "\n", + "Once the installation is complete, we can start linking together everything we have learned from the last few episodes. Recall that Drew needs to learn how to retrieve data from a cloud bucket, store the data and run some analysis on it. " + ] + }, + { + "cell_type": "markdown", + "id": "0992a27a", + "metadata": {}, + "source": [ + "# Get example code onto EC2 instance\n", + "\n", + "Recall again that Drew needs to run some analysis on the dataset. We will first download our code to our EC2 instance. We will be downloading the code from a repository hosted in the cloud using the `git` command. \n", + "\n", + "```{admonition} Note\n", + "Git is a version control system that lets you manage and keep track of your source code history. GitHub is a cloud-based hosting service that lets you manage Git repositories. \n", + "```\n", + "\n", + "First check that `git` is installed:\n", + "\n", + "```bash\n", + "git --version\n", + "\n", + "```\n", + "> git version 2.25.1\n", + "\n", + "Now we'll use git to \"clone\" a repository (i.e. 
copy the repository) from GitHub: \n", + "\n", + "```bash\n", + "git clone https://github.internet2.edu/CLASS/CLASS-Examples.git\n", + "```\n", + "> Cloning into 'CLASS-Examples'... \n", + "remote: Enumerating objects: 66, done. \n", + "remote: Total 66 (delta 0), reused 0 (delta 0), pack-reused 66 \n", + "Unpacking objects: 100% (66/66), 9.44 KiB | 508.00 KiB/s, done.\n", + "\n", + "We now change into the `aws-landsat` directory inside the `CLASS-Examples` directory that the previous git command just created, and list its contents.\n", + "\n", + "```bash\n", + "cd ~/CLASS-Examples/aws-landsat/ && ls -l\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "ca96288c", + "metadata": {}, + "source": [ + "# Public s3 buckets \n", + "\n", + "The data that Drew wants to work with is from the Landsat 8 satellite. The Landsat series of satellites has produced the longest continuous record of Earth’s land surface as seen from space. The bucket that Drew wants to obtain data from is part of the [AWS Open Data program](https://registry.opendata.aws/). The Registry of Open Data on AWS makes it easy to find datasets made publicly available through AWS services. Drew already knows that the data exists in a public s3 bucket at `s3://landsat-pds`. Public in this case means that anyone can freely download data from this bucket. \n", + "\n", + "Let's look at what is stored in the `s3://landsat-pds` bucket. \n", + "\n", + "```bash\n", + "aws s3 ls s3://landsat-pds\n", + "```\n", + "\n", + "You should see a list of files and folders that are hosted on the `s3://landsat-pds` bucket. More information about this bucket and its related files and folders can be found here: https://docs.opendata.aws/landsat-pds/readme.html.\n", + "\n", + "We can also list the contents of a **folder** in the bucket. 
\n", + "\n", + "```bash\n", + "aws s3 ls s3://landsat-pds/c1/\n", + "```\n", + "> PRE L8/\n", + "\n", + "You will now notice that there is A LOT of data in this bucket. In fact, a single Landsat8 scene is about 1 Gb in size since it contains a large array of data for each imagery band and there is almost a Petabyte of data in this bucket and and growing! \n", + "\n", + "```{admonition} Note\n", + "Downloading data from one bucket to another is not a recommended practice when working on the cloud. Ideally, you would develop a workflow that allows you to bring your compute to the cloud instead of transferring data. Several new data formats like [Cloud-Optimized Geotiffs (COGs)](https://www.cogeo.org/) allow you to work directly with cloud-hosted data instead of having to download data. \n", + "```\n", + "\n", + "We will test how to extract the data one for one Landsat8 image (scene). The area that our resident scientist, Drew is interested in is located in the Sierra Nevada mountains. From this converter: https://landsat.usgs.gov/landsat_acq#convertPathRow, Drew has determined that he would like to work with a scene from path 42, row 34 or latitute 37.478, longitude -119.048. He would also like to with the Landsat8 Collection 1 data, Tier 1 data for the dates of June 16 - June 29, 2017 due to low cloud cover for this dataset. \n", + "\n", + "Let's list the files that are in the s3 bucket that contains all these parameters. Each of these files contains an image for a particular spectral band. : \n", + "\n", + "```bash\n", + "aws s3 ls s3://landsat-pds/c1/L8/042/034/LC08_L1TP_042034_20170616_20170629_01_T1/\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "45b0304b", + "metadata": {}, + "source": [ + "# Running an Analysis\n", + "\n", + "We will now use the `process_sat.py` script to open the files in this s3 bucket and run some analysis using an open-source package called [`rasterio`](https://rasterio.readthedocs.io/en/latest/). 
\n", + "\n", + "Let's first install the package:\n", + "\n", + "```bash\n", + "sudo apt-get install python3-rasterio --yes\n", + "```\n", "\n", - "\n" + "After installation, we can check our working directory and list the directory content. Make sure you are in the folder `~/CLASS-Examples/aws-landsat/`\n", + "\n", + "```bash\n", + "pwd\n", + "```\n", + "> /home/ubuntu/CLASS-Examples/aws-landsat/\n", + "\n", + "Let's take a look at `process_sat.py`\n", + "\n", + "```bash\n", + "cat process_sat.py\n", + "```\n", + "```python\n", + "#!/usr/bin/python3\n", + "import os\n", + "import rasterio\n", + "import numpy as np\n", + "\n", + "print('Landsat on AWS:')\n", + "filepath = 'https://landsat-pds.s3.amazonaws.com/c1/L8/042/034/LC08_L1TP_042034_20170616_20170629_01_T1/LC08_L1TP_042034_20170616_20170629_01_T1_B4.TIF'\n", + "with rasterio.open(filepath) as src:\n", + " print(src.profile)\n", + "\n", + "with rasterio.open(filepath) as src:\n", + " oviews = src.overviews(1) # list of overviews from biggest to smallest\n", + " oview = oviews[-1] # let's look at the smallest thumbnail\n", + " print('Decimation factor= {}'.format(oview))\n", + " # NOTE this is using a 'decimated read' (http://rasterio.readthedocs.io/en/latest/topics/resampling.html)\n", + " thumbnail = src.read(1, out_shape=(1, int(src.height // oview), int(src.width // oview)))\n", + "\n", + "print('array type: ',type(thumbnail))\n", + "print(thumbnail)\n", + "\n", + "date = '2017-06-16'\n", + "url = 'https://landsat-pds.s3.amazonaws.com/c1/L8/042/034/LC08_L1TP_042034_20170616_20170629_01_T1/'\n", + "redband = 'LC08_L1TP_042034_20170616_20170629_01_T1_B{}.TIF'.format(4)\n", + "nirband = 'LC08_L1TP_042034_20170616_20170629_01_T1_B{}.TIF'.format(5)\n", + "\n", + "with rasterio.open(url+redband) as src:\n", + " profile = src.profile\n", + " oviews = src.overviews(1) # list of overviews from biggest to smallest\n", + " oview = oviews[1] # Use second-highest resolution overview\n", + " print('Decimation factor= 
{}'.format(oview))\n", + " red = src.read(1, out_shape=(1, int(src.height // oview), int(src.width // oview)))\n", + " print(red)\n", + " \n", + "with rasterio.open(url+nirband) as src:\n", + " oviews = src.overviews(1) # list of overviews from biggest to smallest\n", + " oview = oviews[1] # Use second-highest resolution overview\n", + " nir = src.read(1, out_shape=(1, int(src.height // oview), int(src.width // oview)))\n", + " print(nir)\n", + "\n", + "def calc_ndvi(nir,red):\n", + " '''Calculate NDVI from integer arrays'''\n", + " nir = nir.astype('f4')\n", + " red = red.astype('f4')\n", + " ndvi = (nir - red) / (nir + red)\n", + " return ndvi\n", + "\n", + "np.seterr(invalid='ignore')\n", + "ndvi = calc_ndvi(nir,red)\n", + "print(ndvi)\n", + "\n", + "localname = 'LC08_L1TP_042034_20170616_20170629_01_T1_NDVI_OVIEW.tif'\n", + "\n", + "with rasterio.open(url+nirband) as src:\n", + " profile = src.profile.copy()\n", + "\n", + " aff = src.transform\n", + " newaff = rasterio.Affine(aff.a * oview, aff.b, aff.c,\n", + " aff.d, aff.e * oview, aff.f)\n", + " profile.update({\n", + " 'dtype': 'float32',\n", + " 'height': ndvi.shape[0],\n", + " 'width': ndvi.shape[1],\n", + " 'transform': newaff}) \n", + "\n", + " with rasterio.open(localname, 'w', **profile) as dst:\n", + " dst.write_band(1, ndvi)\n", + " \n", + "```" ] + }, + { + "cell_type": "markdown", + "id": "ab907e17", + "metadata": {}, + "source": [ + "We see that in `process_sat.py` we will be opening the Red and Near Infrared band overview files using rasterio, then calculating the Normalized Difference Vegetation Index (NDVI), which is a measure of changes in vegetation or land cover. 
\n", + "\n", + "Let's run `process_sat.py`:\n", + "\n", + "```bash \n", + "./process_sat.py\n", + "```\n", + "\n", + "> Landsat on AWS:\n", + "{'driver': 'GTiff', 'dtype': 'uint16', 'nodata': None, 'width': 7821, 'height': 7951, 'count': 1, 'crs': CRS.from_epsg(32611), 'transform': Affine(30.0, 0.0, 204285.0,\n", + " 0.0, -30.0, 4268115.0), 'blockxsize': 512, 'blockysize': 512, 'tiled': True, 'compress': 'deflate', 'interleave': 'band'}\n", + "Decimation factor= 81\n", + "array type: \n", + "[[0 0 0 ... 0 0 0]\n", + " [0 0 0 ... 0 0 0]\n", + " [0 0 0 ... 0 0 0]\n", + " ...\n", + " [0 0 0 ... 0 0 0]\n", + " [0 0 0 ... 0 0 0]\n", + " [0 0 0 ... 0 0 0]]\n", + "Decimation factor= 9\n", + "[[0 0 0 ... 0 0 0]\n", + " [0 0 0 ... 0 0 0]\n", + " [0 0 0 ... 0 0 0]\n", + " ...\n", + " [0 0 0 ... 0 0 0]\n", + " [0 0 0 ... 0 0 0]\n", + " [0 0 0 ... 0 0 0]]\n", + "[[nan nan nan ... nan nan nan]\n", + " [nan nan nan ... nan nan nan]\n", + " [nan nan nan ... nan nan nan]\n", + " ...\n", + " [nan nan nan ... nan nan nan]\n", + " [nan nan nan ... nan nan nan]\n", + " [nan nan nan ... nan nan nan]]\n" + ] + }, + { + "cell_type": "markdown", + "id": "db5a1d74", + "metadata": {}, + "source": [ + "Let's check if we obtained an image in our directory:\n", + "\n", + "```bash \n", + "ls\n", + "```\n", + "> LC08_L1TP_042034_20170616_20170629_01_T1_NDVI_OVIEW.tif process_sat.py\n", + "\n", + "Now we need to upload this image to our s3 bucket. \n", + "\n", + "```{admonition} Exercise\n", + ":class: attention\n", + "How would you find the name of your s3 bucket and copy the file LC08_L1TP_042034_20170616_20170629_01_T1_NDVI_OVIEW.tif over? 
\n", + "```\n", + "\n", + "```{dropdown} Answer\n", + "aws s3 ls\n", + "aws s3 cp ./LC08_L1TP_042034_20170616_20170629_01_T1_NDVI_OVIEW.tif s3://bucket-userXXXXXXX/\n", + "\n", + "```\n", + "\n", + "Once we have uploaded the image, we can check if it's in the bucket:\n", + "\n", + "```bash\n", + "aws s3 ls s3://bucket-userXXXXXXX\n", + "```\n", + "> 2022-02-08 05:51:01 1837900 LC08_L1TP_042034_20170616_20170629_01_T1_NDVI_OVIEW.tif \n", + "2022-02-04 06:45:30 26 hemingway.txt" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "426245b5", + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -79,7 +344,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.5" + "version": "3.10.2" } }, "nbformat": 4, diff --git a/content/AWS/06_monitoring_costs.ipynb b/content/AWS/06_monitoring_costs.ipynb new file mode 100644 index 0000000..e469660 --- /dev/null +++ b/content/AWS/06_monitoring_costs.ipynb @@ -0,0 +1,121 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "ea763c0e", + "metadata": {}, + "source": [ + "# Monitoring Costs\n", + "\n", + "\n", + "```{admonition} Overview\n", + ":class: tip\n", + "\n", + "**Teaching:** 15 mins\n", + "\n", + "**Exercises:** -\n", + "\n", + "**Questions:**\n", + "* How do I find what resources are being used on my account?\n", + "\n", + "**Objectives:**\n", + "* Learn about AWS Tag Editor\n", + "* Learn about Billing Accounts\n", + "* Find information about the Billing Account associated with your project\n", + "```\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "id": "bcf032a4", + "metadata": {}, + "source": [ + "# Ways to monitor resources on your AWS account\n", + "\n", + "There are several ways to monitor what kinds of services that are running on your AWS account. 
We will focus on two: the AWS Tag Editor and the EC2 global view. \n", + "\n", + "\n", + "The AWS Tag Editor offers a way to look at resources that are running in your account. Recall in Episode 2 that tagging resources is a best practice to manage costs. The AWS Tag Editor finds resources on your account and allows you to tag them if you haven't already done so. It is a good tool to list ALL resources that are running on your account, or to filter them using specific criteria. Advanced use of the AWS Tag Editor is beyond the scope of CLASS Essentials. Here we will use the AWS Tag Editor to list all the resources on our account. \n", + "\n", + "1. Search for `Tag Editor` in the navigation bar of the AWS console\n", + "2. On the sidebar menu, click `Tag Editor`\n", + "3. In the Regions dropdown select `All regions`\n", + "4. In the Resource types dropdown select `All supported resource types`\n", + "5. Click on the Search resources button\n", + "\n", + "A table with the resource search results will be shown at the bottom of the page.\n", + "\n", + "The table displays the following information:\n", + "\n", + "> an identifier for the resource \n", + " the Name tag of the resource (if it has one) \n", + " the service that corresponds to the resource \n", + " the resource type \n", + " the region the resource is provisioned in \n", + " all of the tags on the resource. \n", + " Note that you can click on the badge with the number of tags to display the resource's tags\n", + "\n", + "The AWS EC2 global view is another way to check if you have EC2 instances running in *any* region. \n", + "\n", + "1. Go to the EC2 Dashboard\n", + "2. On the sidebar menu, click `EC2 Global View`\n", + "3. You will see the resource summary and it displays the following information (you may also run into an error message that you can ignore. 
The error lets you know that you have insufficient administrative privileges to view some of the resources on the account):\n", + "\n", + "> Enabled Regions \n", + "Instances \n", + "VPCs \n", + "Subnets \n", + "Security groups \n", + "Volumes \n", + "\n", + "You can click any of the links to get more details on the resource being used. " + ] + }, + { + "cell_type": "markdown", + "id": "71c88189", + "metadata": {}, + "source": [ + "# The AWS billing dashboard" + ] + }, + { + "cell_type": "markdown", + "id": "4319f2f6", + "metadata": {}, + "source": [ + "AWS billing is disabled for AWS Academy. The granularity to which you can view your billing details varies by institution. However, if you are able to view the Billing Dashboard, AWS offers a detailed view of resources and estimated spend. The image below shows an example of the AWS Billing Dashboard. You can get to the Billing Dashboard by clicking your username in the top right corner of the navigation bar. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fa28c172", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.2" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/content/AWS/08_cleaning_up_resources.ipynb b/content/AWS/07_cleaning_up_resources.ipynb similarity index 62% rename from content/AWS/08_cleaning_up_resources.ipynb rename to content/AWS/07_cleaning_up_resources.ipynb index 665eaa7..5d96ba5 100644 --- a/content/AWS/08_cleaning_up_resources.ipynb +++ b/content/AWS/07_cleaning_up_resources.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "dc57021c", 
"metadata": {}, "source": [ "# Cleaning Up Resources\n", @@ -11,16 +11,26 @@ "```{admonition} Overview\n", ":class: tip\n", "\n", - "**Teaching:**\n", + "**Teaching:** 15 mins\n", "\n", - "**Exercises:**\n", + "**Exercises:** 5 mins\n", "\n", "**Questions:**\n", + "* How do I clean up resources on my AWS account to minimize cost?\n", + "* What are the best practices to ensure there is no cost overrun?\n", "\n", "**Objectives:**\n", - "\n", + "* Learn to clean up AWS resources \n", "```" ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "40ee5bca", + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { @@ -39,7 +49,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.10" + "version": "3.10.2" } }, "nbformat": 4, diff --git a/content/AWS/07_monitoring_costs.ipynb b/content/AWS/07_monitoring_costs.ipynb deleted file mode 100644 index 5aaf0dd..0000000 --- a/content/AWS/07_monitoring_costs.ipynb +++ /dev/null @@ -1,47 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "dc57021c", - "metadata": {}, - "source": [ - "# Monitoring Costs\n", - "\n", - "\n", - "```{admonition} Overview\n", - ":class: tip\n", - "\n", - "**Teaching:**\n", - "\n", - "**Exercises:**\n", - "\n", - "**Questions:**\n", - "\n", - "**Objectives:**\n", - "\n", - "```" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.10" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/content/_toc.yml b/content/_toc.yml index ed593ab..7f86396 100644 --- a/content/_toc.yml +++ b/content/_toc.yml @@ -14,13 +14,12 @@ parts: - file: AWS/intro_to_AWS_Essentials sections: - file: 
AWS/01_intro_to_cloud_console - - file: AWS/02_intro_to_compute_part1 - - file: AWS/03_intro_to_compute_part2 - - file: AWS/04_intro_to_cloud_storage - - file: AWS/05_intro_to_cli - - file: AWS/06_running_analysis - - file: AWS/07_monitoring_costs - - file: AWS/08_cleaning_up_resources + - file: AWS/02_intro_to_compute + - file: AWS/03_intro_to_cloud_storage + - file: AWS/04_intro_to_cli + - file: AWS/05_running_analysis + - file: AWS/06_monitoring_costs + - file: AWS/07_cleaning_up_resources - file: Azure/intro_to_Azure_Essentials sections: