diff --git a/Build.md b/Build.md index 2a88a6d..cd47601 100644 --- a/Build.md +++ b/Build.md @@ -29,6 +29,19 @@ To publish to the `gh-pages` of the `origin` (or other) remote run. This copies the contents of `content/_build/html/` to `docs/` so GitHub Pages will render the document. GitHub pages should be set to the `gh-pages` and `docs/` folder. The script pushes a new branch overwriting the old branch and will not track changes. +## VM Development Setup + +Have your `~/.ssh/config` include the file that the `*-create.sh` scripts create. This is a one-time setup per machine. + +``` +echo "include ~/.ssh/essentials.config" >> ~/.ssh/config +``` + +Verify that your Git name and email are correct (locally), as they are copied over to the VM: +``` +git config user.name +git config user.email +``` ## GCP diff --git a/content/GCP/01_intro_to_cloud_console.ipynb b/content/GCP/01_intro_to_cloud_console.ipynb index d7a5d60..60e21b6 100644 --- a/content/GCP/01_intro_to_cloud_console.ipynb +++ b/content/GCP/01_intro_to_cloud_console.ipynb @@ -12,7 +12,7 @@ "```{admonition} Overview\n", ":class: tip\n", "\n", - "**Teaching:** 10 min.\n", + "**Teaching:** 15 min.\n", "\n", "**Exercises:** 6 min.\n", "\n", @@ -32,7 +32,7 @@ "id": "10be57f6", "metadata": {}, "source": [ - "## The Who, What, and Where of the Cloud\n", + "## The Who, Where, and What of the Cloud\n", "\n", "Before we get started we must first define, and find, a few key pieces of information about your environment that will be used throughout this lesson. This information is also important to ensure that this information is what you expect, otherwise you may use the wrong account or run or store things in the wrong place. It is important to be clear about these terms as they are often different in other cloud providers. We provide a [glossary of GCP terms](glossary.ipynb) for your reference." 
] @@ -82,10 +82,10 @@ "source": [ "## Projects\n", - "Almost everything you will do within Google Cloud Platform must be associated with a **Project**. This is the \"Where\" of the cloud. You must have at least one Project and you can manage multiple Projects within a single (Google) Account. Always make sure you are working in the correct project!\n", + "Almost everything you will do within Google Cloud Platform must be associated with a **Project**. This is the \"**Where**\" of the cloud. You must have at least one Project and you can manage multiple Projects within a single (Google) Account. Always make sure you are working in the correct project!\n", "\n", "To find more information and change project settings\n", - " * Click on the \"Settings and Utilities button (kabob on the top right - see below) and then click on \"Project Settings\"\n", + " * Click on the \"Settings and Utilities\" button (**kabob** on the top right - see below) and then click on \"Project Settings\"\n", " ![kabob-project](img/kabob-project.png)\n", " * The Project Name is the human friendly description and can be changed.\n", " * The Project ID is set on project creation and cannot be changed. The Project ID is almost always used when specifying a project.\n", @@ -93,7 +93,7 @@ " \n", "For a Project to do anything useful it must also have an enabled Billing Account associated with it. An enabled Billing Account is a prerequisite for this lesson.\n", "\n", - "A Project, just like the word, should be associated with a real world project (for example a research effort or grant, lab, or your Ph.D. Project. etc). For Drew, this is the image processing project. In this way it is easier to track and allocate costs and to manage permissions and access for resources within the project.\n", + "A Project, just like the word, should be associated with a real world project (for example a research effort or grant, lab, or your Ph.D. project, etc.). For Drew, this is the image processing project. 
In this way it is easier to track and allocate costs and to manage permissions and access for resources within the project. Work in a project should have similar **people**, **lifecycle**, and **funding**.\n", "\n", "The active project is also shown next to the project icon (three hexagons) and clicking it brings up the project selection dialog. The current project has a check mark and the active project can be changed by clicking on the project name or double clicking the row.\n", " ![select-project](img/select-project.png) " @@ -128,7 +128,7 @@ "\n", "The web console is used to control and observe the cloud from the browser. It should only be used for simple and one-time tasks, exploring new services, accessing documentation, or for monitoring and debugging resources in the cloud. Programmatic control through the console, programming languages (for example Python), and other automation tools should be used for day to day activities to make the most out of the cloud and to help with the reproducibility of research and teaching.\n", "\n", - "The Navigation Menu (often called the hamburger) is used to navigate to the various products, which are also sometimes called services.\n", + "The Navigation Menu (often called the **hamburger**) is used to navigate to the various products, which are also sometimes called services.\n", "\n", "![hamburger-navigation](img/hamburger-navigation.png)\n", "\n", @@ -136,7 +136,7 @@ " * You can pin frequently used items on the top of this page by clicking on the pin icon. \n", " * Click the hamburger again to hide the Left Sidebar.\n", "\n", - "All the different products and services are the \"What\" of the cloud. We will start with compute, called *Google Compute Engine* in the next Episode. " + "All the different products and services are the \"**What**\" of the cloud. We will start with compute, called *Google Compute Engine* in the next Episode. 
" ] }, { @@ -149,9 +149,9 @@ "```{admonition} Exercise\n", "\n", "Take a few moments to navigate a few key services.\n", - " * Navigate to the \"Compute Engine\" service under the \"Compute\" product group.\n", - " * You will probably need to \"Enable\" this service first by clicking on the \"Enable\" button on the \"Compute Engine API\" page. This will only need to be done once per project.\n", - " * Navigate to \"Cloud Storage\" under the \"Storage\" product group and enable the service if necessary.\n", + " * Navigate to \"Cloud Storage\" under the \"Storage\" product group.\n", + " * You will probably need to \"Enable\" this service first by clicking on the \"Enable\" button on the \"Cloud Storage API\" page. This will only need to be done once per project.\n", + " * Navigate to the \"Compute Engine\" service under the \"Compute\" product group and enable the service if necessary.\n", "```" ] }, diff --git a/content/GCP/02_intro_to_compute.ipynb b/content/GCP/02_intro_to_compute.ipynb index fda48cd..f210727 100644 --- a/content/GCP/02_intro_to_compute.ipynb +++ b/content/GCP/02_intro_to_compute.ipynb @@ -12,7 +12,7 @@ "```{admonition} Overview\n", ":class: tip\n", "\n", - "**Teaching:** 45 min.\n", + "**Teaching:** 30 min.\n", "\n", "**Exercises:** 6 min\n", "\n", @@ -67,7 +67,7 @@ "To create a VM Instance we do the following:\n", " * Click **Navigation Menu** -> **Compute Engine** (under Compute) -> **VM Instances** -> **+Create Instance** (just under the blue bar) to open the *Create an instance* page.\n", " * In the **New VM instance** tab on the left (selected by default) configure the *VM instance* as follows:\n", - " * For **Name**, enter a unique name for the instances (example: \"essentials-instance-1\")\n", + " * For **Name**, enter a unique name for the instance (example: \"**essentials-test-1**\")\n", " * For **Region** leave the default or select your \"home\" region. The region is the physical location where your data will reside. 
Your \"home\" region should be close to your work and should be the region you use most of the time.\n", " * For **Zone** leave the default (note how the name is constructed and that it is a separate data center) some zones have different capabilities.\n", " * In the **Machine configuration** section:\n", @@ -97,7 +97,7 @@ "id": "41c63432-a614-4a1e-9967-f49b68f9069e", "metadata": {}, "source": [ - "## Security\n", + "## More on Security (Optional)\n", "\n", "Everything in the cloud requires permission (authorization). Ordinary we would configure and check security first but in the case of exploring services it is often easier to do things out of order. We noted that the *VM instance* was created with the *Compute Engine default service account*, and if the \"Allow full access to all Cloud Api's\" scope is enable, then everyone on the VM has access to all the resources in your project.\n", "\n", @@ -125,9 +125,9 @@ "tags": [] }, "source": [ - "## Follow the VM Allocation\n", + "## Track VM Instance Creation\n", "\n", - "Just as with security, we will audit (follow) the *VM instance* creation by examining at the project *activity*.\n", + "We can track what is going on in our project by examining the *VM instance* creation on the project *activity* page on the project dashboard.\n", "\n", "To view the project activity we do the following:\n", "\n", @@ -147,7 +147,7 @@ "tags": [] }, "source": [ - "## Enumerate the VM Instances\n", + "## Find the VM Instance\n", "\n", "Now lets find and connect to the *VM Instance*.\n", " * Navigate to the Google Compute Engine page by clicking **Navigation Menu** -> **Compute Engine** (under Compute) -> **Instances**.\n", @@ -174,7 +174,7 @@ "\n", "To connect to the *VM instance* we enter the following command in the cloud shell:\n", "```\n", - "gcloud compute ssh essentials-instance-1\n", + "gcloud compute ssh essentials-test-1\n", "```\n", "\n", "If you have not used the cloud shell to connect to a *VM Instance* before 
you will probably be asked to create a new *ssh key*. The Compute Engine will use this key to allow you to access the *VM instance* in a secure manner. If this is the case you will see a message similar to the following:\n", @@ -223,15 +223,15 @@ "\n", "At this point the command will attempt to connect to the *VM Instance* and will ask the following question:\n", "```\n", - "Did you mean zone [us-central1-a] for instance: [essentials-instance-1] (Y/n)? n\n", + "Did you mean zone [us-central1-a] for instance: [essentials-test-1] (Y/n)? n\n", "```\n", "Answer \"n\".\n", "\n", "The command will now configure the instance to allow your ssh key and connect to it.\n", "\n", "```\n", - "No zone specified. Using zone [us-central1-a] for instance: [essentials-instance-1].\n", - "Updating project ssh metadata...working..Updated [https://www.googleapis.com/compute/v1/projects/class-essentials-instance-1].\n", + "No zone specified. Using zone [us-central1-a] for instance: [essentials-test-1].\n", + "Updating project ssh metadata...working..Updated [https://www.googleapis.com/compute/v1/projects/class-essentials-test-1].\n", "Updating project ssh metadata...done.\n", "Waiting for SSH key to propagate.\n", "Warning: Permanently added 'compute.74517428106645607' (ECDSA) to the list of known hosts.\n", @@ -240,7 +240,7 @@ "\n", "Once connected you will see the machine login banner and prompt similar to the following:\n", "```\n", - "Linux instance-1 4.19.0-17-cloud-amd64 #1 SMP Debian 4.19.194-3 (2021-07-18) x86_64\n", + "Linux essentials-test-1 4.19.0-17-cloud-amd64 #1 SMP Debian 4.19.194-3 (2021-07-18) x86_64\n", "\n", "The programs included with the Debian GNU/Linux system are free software;\n", "the exact distribution terms for each program are described in the\n", @@ -248,7 +248,7 @@ "\n", "Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent\n", "permitted by applicable law.\n", - "learner@essentials-instance-1:~$\n", + "learner@essentials-test-1:~$\n", "```\n", 
"\n", "Congratulations you have now created a *VM Instance* and connected to it.\n", @@ -288,7 +288,7 @@ "```{admonition} Exercise\n", "\n", "* Try to delete the ***VM instance*** on your own\n", - "* Try to follow the process we just learned (hint: *follow* and *enumerate*)\n", + "* Try to follow the process we just learned (hint: *track* and *list*)\n", "```" ] }, diff --git a/content/GCP/03_intro_to_cloud_storage.ipynb b/content/GCP/03_intro_to_cloud_storage.ipynb index 452861e..a8e3028 100644 --- a/content/GCP/03_intro_to_cloud_storage.ipynb +++ b/content/GCP/03_intro_to_cloud_storage.ipynb @@ -10,7 +10,7 @@ "```{admonition} Overview\n", ":class: tip\n", "\n", - "**Teaching:** 40 min\n", + "**Teaching:** 20 min\n", "\n", "**Exercises:** 5 min\n", "\n", @@ -19,7 +19,6 @@ "\n", "**Objectives:**\n", "* Navigate the Google Cloud Storage service and terminology\n", - "* Understand the roles and permissions needed to use Google Cloud Storage in projects\n", "* Allocate storage in Google Cloud Storage\n", "* Find the cost estimator for Google Cloud Storage\n", "* Recognize that resources have a \"location\"\n", @@ -50,41 +49,6 @@ "We now take Drew through the process of creating a Google Cloud Storage bucket." ] }, - { - "cell_type": "markdown", - "id": "07fb9096-2b40-4995-a742-be7bd9b2797c", - "metadata": { - "tags": [] - }, - "source": [ - "## Security\n", - "\n", - "Everything in the cloud requires permission (authorization). Let's first verify that we have the permissions to create a bucket. A Bucket (a resource) is created within a project and inheres permissions from it.\n", - "\n", - "We are interested in what permissions that *your* account has for *your* project. To do this navigate to the IAM page (**Navigation Menu -> IAM & Admin -> IAM -> Permissions -> View By: Principals**). 
This shows the permissions for the project.\n", - "\n", - "*Note: There is a powerful filter box to limit the permissions shown.*\n", - "\n", - "You should see a row with your account shown in the Principal column. Here you should see the \"Editor\" Role in the Role column. A *role* is a collection of permissions managed by Google or someone else. The **Editor**, **Owner**, or the **Storage Admin** role for a project will *allow* *you* to create, access, and delete Buckets *in* the project.\n", - "\n", - "There are three important pieces of information that work together to form the **IAM policy**. The permission (role), the identity (principal), and the resource (project). This is another who (identity), what (permission), and where (resource)." - ] - }, - { - "cell_type": "markdown", - "id": "9acf29cf-660b-4922-bcb8-89fd9080fdea", - "metadata": { - "tags": [] - }, - "source": [ - "```{admonition} Exercise\n", - "\n", - "Answer the following questions:\n", - " * What is the \"Who, What, Where\" of the IAM policy that allows you to use your project?\n", - " * What else has permissions to do things in your project and state the \"Who, What, Where\"?\n", - "```" - ] - }, { "cell_type": "markdown", "id": "c5430b40-1a5f-40df-9e13-529ef3ece4ce", @@ -94,12 +58,12 @@ "source": [ "## Allocate Google Cloud Storage\n", "\n", - "Now that we have verified the permissions we can now create a bucket. 
Buckets are where objects are stored and have a globally unique name.\n", "\n", "To create a bucket we do the following:\n", " * Click **Navigation Menu** -> **Cloud Storage** (under Storage) -> **Browser** -> **+Create Bucket** (just under the blue bar) to open the *Create a bucket* page.\n", " * In *Name your bucket*:\n", - " * For **Name**, enter a globally unique name for the bucket (example \"essentials-test-myname-2021-01-01\")\n", + " * For **Name**, enter a globally unique name for the bucket (example \"**essentials-test-myname-2022-01-01**\")\n", " * Click **Continue**\n", " * In *Choose where to store your data*:\n", " * For *Location Type* select **Region** (cheapest and fastest)\n", @@ -129,9 +93,9 @@ "tags": [] }, "source": [ - "## Follow the Storage Allocation\n", + "## Track the Storage Allocation\n", "\n", - "Just as with compute, we will audit (follow) the bucket creation by examining at the project *activity*.\n", + "Just as with compute, we will track (follow) the bucket creation by examining the project *activity*.\n", "\n", "To view the project activity we do the following:\n", "\n", @@ -150,15 +114,13 @@ "tags": [] }, "source": [ - "## Enumerate the Buckets\n", + "## List the Buckets\n", "\n", "Now lets find and examine the bucket. To view a bucket we do the following:\n", "\n", " * Navigate to the Google Storage page by clicking **Navigation Menu** -> **Cloud Storage** (under Storage) -> **Browser**. \n", " * **Find** the bucket you just created. You can use the filter to find a bucket if there are a lot of them.\n", - " * Click on the bucket name to open the **bucket details** (it will display as a hyperlink when you hover over the bucket name).\n", - "\n", - "Navigate to the **dashboard** and you will now see \"Storage\" in the *resources* card under. You can click on this to quickly navigate to the storage page." 
+ " * Click on the bucket name to open the **bucket details** (it will display as a hyperlink when you hover over the bucket name).\n" ] }, { @@ -170,12 +132,12 @@ "source": [ "## Review what is Important\n", "\n", - "It is always important to review what is important to you. It may be cost, or keeping the data secure. Later on we will show how to monitor overall costs.\n", + "It is always important to review what matters most to you. It may be cost, or keeping the data secure. Later on we will show how to monitor overall costs. We will also learn how to use the \"info panel\" to show more information about a bucket.\n", "\n", "For Drew, we will review that the bucket **public access** is *not public* by doing the following:\n", " * Go to **Navigation Menu -> Cloud Storage -> Browser**\n", " * Select the bucket of interest by **checking the box** next to the Bucket name.\n", - " * In the Right Side Bar (open if necessary) in the **Permissions** tab in the **Public Access** card you should see **Not Public**. This means that public access prevention is turned on.\n", + " * In the **Info Panel** (click \"Show Info Panel\" if necessary) in the **Permissions** tab in the **Public Access** card you should see **Not Public**. This means that public access prevention is turned on.\n", " * You can also see the **permissions** for the bucket in the bottom of the bar." ] }, @@ -225,23 +187,11 @@ "\n", "![storage-delete-bucket](img/storage-delete-bucket.png)\n", "\n", - "Did you \"Follow\" the bucket by looking at the **activity** page as discussed above?\n", + "Did you \"Track\" the bucket by looking at the **activity** page as discussed above?\n", "\n", "Since we care about paying for resources we are not using we review our project by visiting the *compute storage* service and reviewing that we no longer have any *Buckets* allocated. 
" ] }, - { - "cell_type": "markdown", - "id": "3a28e28d-1d70-44fa-a952-4f3506ea85ec", - "metadata": {}, - "source": [ - "## Discussion (Optional)\n", - "\n", - "* What does the words \"Secure\", \"Allocate\", \"Follow\", and \"Enumerate\" spell?\n", - "* What happens when you add the \"R\" in \"Review?\"\n", - "* Is this useful?" - ] - }, { "cell_type": "markdown", "id": "97d7ebc5-4a81-4f1a-aaf3-517adf70640a", "metadata": {}, "source": [ diff --git a/content/GCP/04_intro_to_cli.ipynb b/content/GCP/04_intro_to_cli.ipynb index 72499e4..957463e 100644 --- a/content/GCP/04_intro_to_cli.ipynb +++ b/content/GCP/04_intro_to_cli.ipynb @@ -10,7 +10,7 @@ "```{admonition} Overview\n", ":class: tip\n", "\n", - "**Teaching:** 40 min\n", + "**Teaching:** 20 min\n", "\n", "**Exercises:** 5 min\n", "\n", @@ -23,8 +23,7 @@ "* Use basic cloud CLI commands (`gcloud` and `gsutil`).\n", "* Verify basic settings.\n", "* Use environment variables for configuration.\n", - "* Understand the importance of using variables for configuration.\n", - "* Recognize the value of reproducibility and automation.\n", + "* Understand the importance of using variables for reproducibility and automation.\n", "``` " ] }, @@ -50,7 +49,7 @@ "\n", "The cloud can be controlled using a Command Line Interface (CLI) or a programming language such as Python. Collectively these tools interact with the cloud over a Application Programming Interface (API) and this capability forms the basis of the cloud, the ability to control infrastructure programmatically.\n", "\n", - "Just as with navigating the web console it is important to know the **who**, **what**, and **where** of CLI access to reduce the possibility of access mistakes. We will first verify the tools are installed and configured correctly. Next we get the Account being used (who) and the Project ID of the active project (where) using the `gcloud` command. 
We will then use the `gcloud` and `gsutil` commands to list some public Buckets (what).\n", + "Just as with navigating the web console it is important to know the **who**, **where**, and **what** of CLI access to reduce the possibility of access mistakes. We will first verify the tools are installed and configured correctly. Next we get the Account being used (who) and the Project ID of the active project (where) using the `gcloud` command. We will then use the `gcloud` and `gsutil` commands to list some public Buckets (what).\n", "\n", "The `gcloud` command is used to control most aspects of GCP and the `gsutil` command is used to control Google Cloud Storage Buckets. To access the manual pages for a command just add `--help` to the end of the command or run `gcloud help` for more information.\n", "\n", @@ -97,7 +96,7 @@ "tags": [] }, "source": [ - "## Verify the Configuration (Who, What, Where)\n", + "## Verify the Configuration (Who, Where, What)\n", "\n", "First, let's verify that the Account being used for access (who) is what we expect." ] @@ -148,57 +147,12 @@ "gcloud config get-value project" ] }, - { - "cell_type": "markdown", - "id": "c2972d7b-f393-42b5-8330-cf8292d28afb", - "metadata": {}, - "source": [ - "Now we will use `gcloud` to list a well known public bucket (what). 
" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "617325c9-d853-4291-a1db-938ab9439fee", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "gs://gcp-public-data-landsat/index.csv.gz\n", - "gs://gcp-public-data-landsat/LC08/\n", - "gs://gcp-public-data-landsat/LE07/\n", - "gs://gcp-public-data-landsat/LM01/\n", - "gs://gcp-public-data-landsat/LM02/\n", - "gs://gcp-public-data-landsat/LM03/\n", - "gs://gcp-public-data-landsat/LM04/\n", - "gs://gcp-public-data-landsat/LM05/\n", - "gs://gcp-public-data-landsat/LO08/\n", - "gs://gcp-public-data-landsat/LT04/\n", - "gs://gcp-public-data-landsat/LT05/\n", - "gs://gcp-public-data-landsat/LT08/\n" - ] - } - ], - "source": [ - "gcloud alpha storage ls gs://gcp-public-data-landsat" - ] - }, - { - "cell_type": "markdown", - "id": "42e6ad5b-186d-4cd1-ba03-e7c85ad40e38", - "metadata": {}, - "source": [ - "*Advanced Callout: The `alpha` (and `beta`) command allows us to access commands that have not been released for production and care should be taken when using these in a production environment. At this time this is not the recommended way to access storage buckets, but it does help verify that everything is working correctly.*" - ] - }, { "cell_type": "markdown", "id": "1389ca4f-7234-4ea3-9ad8-85914d88ede5", "metadata": {}, "source": [ - "Finally, we will verify that the separate and preferred `gsutil` command is installed and working by listing the same well known public bucket. " + "Now we will use `gsutil` to list a well-known public bucket (what). The `gsutil` command is how we access Google Cloud Storage; most other services use the `gcloud` command." 
] }, { diff --git a/content/GCP/06_running_analysis.ipynb b/content/GCP/06_running_analysis.ipynb index f5bfdd1..5a725e7 100644 --- a/content/GCP/06_running_analysis.ipynb +++ b/content/GCP/06_running_analysis.ipynb @@ -10,7 +10,7 @@ "```{admonition} Overview\n", ":class: tip\n", "\n", - "**Teaching:** 80 min\n", + "**Teaching:** 60 min\n", "\n", "**Exercises:** 8 min\n", "\n", @@ -62,9 +62,9 @@ "````{admonition} Exercise\n", "\n", "Using the console navigate to the \"Compute Engine\" service and create a new VM with the following properties.\n", - " * Call the VM \"essentials\"\n", - " * Allow the VM \"Full\" access to \"Storage\". This can be found under \"Identity and API\" on the \"create an instance\" page and then selecting \"Set access for each API\" and change \"Storage\" to \"Full\". **This will allow the VM to create, read, write, and delete all storage buckets in the project\"**\n", - " * Select a bit larger VM by changing the machine type to something larger, for example an \"e2-standard-2\".\n", + " * Call the VM instance \"essentials\"\n", + " * Allow the VM instance \"Full\" access to \"Storage\". This can be found under \"Identity and API\" on the \"create an instance\" page and then selecting \"Set access for each API\" and change \"Storage\" to \"Full\". 
**This will allow the VM to create, read, write, and delete all storage buckets in the project.**\n", + " * Select a bit larger VM instance by changing the machine type to \"e2-standard-2\".\n", "````" ] }, @@ -151,10 +151,11 @@ "Hit:2 http://deb.debian.org/debian buster InRelease\n", "Hit:3 http://deb.debian.org/debian buster-updates InRelease\n", "Hit:4 http://deb.debian.org/debian buster-backports InRelease\n", - "Get:5 http://packages.cloud.google.com/apt cloud-sdk-buster InRelease [6780 B]\n", - "Hit:6 http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-buster InRelease\n", + "Hit:5 http://packages.cloud.google.com/apt cloud-sdk-buster InRelease\n", + "Get:6 http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-buster InRelease [5553 B]\n", "Hit:7 http://packages.cloud.google.com/apt google-compute-engine-buster-stable InRelease\n", - "Fetched 6780 B in 1s (8978 B/s)\n", + "Get:8 http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-buster/main amd64 Packages [389 B]\n", + "Fetched 5942 B in 1s (7839 B/s)\n", "Reading package lists... Done\n", "Building dependency tree \n", "Reading state information... 
Done\n", @@ -200,12 +201,12 @@ "name": "stdout", "output_type": "stream", "text": [ - "bucket: essentials-learner-2022-02-07 region: us-west2\n" + "bucket: essentials-learner-2022-02-14 region: us-west2\n" ] } ], "source": [ - "BUCKET=\"essentials-${USER}-$(date +%F)\"\n", + "BUCKET=\"essentials-$USER-$(date +%F)\"\n", "REGION=\"us-west2\"\n", "echo \"bucket: $BUCKET region: $REGION\"" ] @@ -220,12 +221,12 @@ "name": "stdout", "output_type": "stream", "text": [ - "Creating gs://essentials-learner-2022-02-07/...\n" + "Creating gs://essentials-learner-2022-02-14/...\n" ] } ], "source": [ - "gsutil mb -b on -l $REGION --pap enforced \"gs://$BUCKET\"" + "gsutil mb -b on -l $REGION --pap enforced gs://$BUCKET" ] }, { @@ -246,7 +247,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "gs://essentials-learner-2022-02-07/\n" + "gs://essentials-learner-2022-02-14/\n" ] } ], @@ -288,7 +289,7 @@ }, { "cell_type": "code", - "execution_count": 11, + "execution_count": 6, "id": "96db6a66-3fbf-419a-b8c8-dbb27639e990", "metadata": {}, "outputs": [], @@ -298,7 +299,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 7, "id": "36554c99-ba08-4733-8ef2-e68d42d0d2b7", "metadata": {}, "outputs": [ @@ -307,11 +308,9 @@ "output_type": "stream", "text": [ "Cloning into 'CLASS-Examples'...\n", - "remote: Enumerating objects: 4, done.\u001b[K\n", - "remote: Counting objects: 100% (4/4), done.\u001b[K\n", - "remote: Compressing objects: 100% (4/4), done.\u001b[K\n", - "remote: Total 70 (delta 0), reused 1 (delta 0), pack-reused 66\u001b[K\n", - "Unpacking objects: 100% (70/70), done.\n" + "remote: Enumerating objects: 86, done.\u001b[K\n", + "remote: Total 86 (delta 0), reused 0 (delta 0), pack-reused 86\u001b[K\n", + "Unpacking objects: 100% (86/86), done.\n" ] } ], @@ -329,7 +328,7 @@ }, { "cell_type": "code", - "execution_count": 13, + "execution_count": 8, "id": "90c1cda7-60d4-44bb-84f8-e776a77a94ab", "metadata": {}, "outputs": [], @@ -350,7 +349,7 @@ }, { 
"cell_type": "code", - "execution_count": 14, + "execution_count": 9, "id": "55b628d5-6e5c-45a5-9cd3-c129db9cdcd2", "metadata": {}, "outputs": [ @@ -359,13 +358,13 @@ "output_type": "stream", "text": [ "total 28\n", - "-rw-r--r-- 1 learner learner 964 Feb 7 16:04 ReadMe.md\n", - "-rw-r--r-- 1 learner learner 72 Feb 7 16:04 clean.sh\n", - "-rw-r--r-- 1 learner learner 280 Feb 7 16:04 download.sh\n", - "-rw-r--r-- 1 learner learner 314 Feb 7 16:04 get-index.sh\n", - "-rw-r--r-- 1 learner learner 613 Feb 7 16:04 process_sat.py\n", - "-rw-r--r-- 1 learner learner 76 Feb 7 16:04 search.json\n", - "-rw-r--r-- 1 learner learner 783 Feb 7 16:04 search.py\n" + "-rw-r--r-- 1 learner learner 964 Feb 14 20:30 ReadMe.md\n", + "-rw-r--r-- 1 learner learner 72 Feb 14 20:30 clean.sh\n", + "-rw-r--r-- 1 learner learner 280 Feb 14 20:30 download.sh\n", + "-rw-r--r-- 1 learner learner 314 Feb 14 20:30 get-index.sh\n", + "-rw-r--r-- 1 learner learner 613 Feb 14 20:30 process_sat.py\n", + "-rw-r--r-- 1 learner learner 76 Feb 14 20:30 search.json\n", + "-rw-r--r-- 1 learner learner 783 Feb 14 20:30 search.py\n" ] } ], @@ -378,7 +377,7 @@ "id": "2b8b3144-7453-4350-a1cf-7fa74af2bcbf", "metadata": {}, "source": [ - "## Access the bucket\n", + "## Access the Bucket\n", "\n", "Now we need to verify that Drew has access to the analysis data. 
\n", "\n", @@ -387,7 +386,7 @@ }, { "cell_type": "code", - "execution_count": 15, + "execution_count": 10, "id": "e56ab74a-ae6d-4602-a26b-4a2656bd40cd", "metadata": {}, "outputs": [ @@ -419,7 +418,7 @@ "id": "9e16e8b5-a178-492a-aa80-5affe721b6ca", "metadata": {}, "source": [ - "## Getting the data" + "## Getting the Metadata" ] }, { @@ -436,7 +435,7 @@ }, { "cell_type": "code", - "execution_count": 16, + "execution_count": 11, "id": "bbe85b75-c7cd-40ed-a3b0-37cbd0a5f52e", "metadata": {}, "outputs": [ @@ -454,7 +453,7 @@ }, { "cell_type": "code", - "execution_count": 17, + "execution_count": 12, "id": "18a9b71c-5871-4ce2-a202-b48ad04e8d38", "metadata": {}, "outputs": [ @@ -468,7 +467,7 @@ "feature is enabled by default but requires that compiled crcmod be\n", "installed (see \"gsutil help crcmod\").\n", "\n", - "/ [1 files][731.9 MiB/731.9 MiB] 63.0 MiB/s \n", + "/ [1 files][731.9 MiB/731.9 MiB] \n", "Operation completed over 1 objects/731.9 MiB. \n" ] } @@ -487,7 +486,7 @@ }, { "cell_type": "code", - "execution_count": 18, + "execution_count": 13, "id": "2cdaf24c-c4aa-4e80-9236-939e7c982916", "metadata": {}, "outputs": [], @@ -505,7 +504,7 @@ }, { "cell_type": "code", - "execution_count": 19, + "execution_count": 14, "id": "b005876c-f9af-43d6-80c6-f04295413b9b", "metadata": {}, "outputs": [ @@ -514,7 +513,7 @@ "output_type": "stream", "text": [ "total 2.5G\n", - "-rw-r--r-- 1 learner learner 2.5G Feb 7 16:04 index.csv\n" + "-rw-r--r-- 1 learner learner 2.5G Feb 14 20:32 index.csv\n" ] } ], @@ -532,7 +531,7 @@ }, { "cell_type": "code", - "execution_count": 20, + "execution_count": 15, "id": "ffe969db-d207-44fe-8957-8d129c76ee8f", "metadata": {}, "outputs": [ @@ -553,11 +552,25 @@ }, { "cell_type": "markdown", - "id": "f98c38de-87fa-4e66-9d4b-186fbf81b3b2", + "id": "588ddd30-af8f-4378-b543-290c4c6f0840", "metadata": {}, "source": [ "````{admonition} Tip\n", - ":class: Tip\n", + ":class: tip\n", + "To run the above commands in one step run\n", + "\n", + "```\n", + 
"bash get-index.sh\n", + "```\n", + "````" + ] + }, + { + "cell_type": "markdown", + "id": "f98c38de-87fa-4e66-9d4b-186fbf81b3b2", + "metadata": {}, + "source": [ + "````{admonition} Break (Optional)\n", "\n", "Now our virtual machine instance is ready and we can access the code and data. Now is a great time to take a short break.\n", "````" @@ -568,14 +581,14 @@ "id": "532e6da3-302a-4e8a-8570-752995f30f1d", "metadata": {}, "source": [ - "## Search for Data\n", + "## Getting the Data\n", "\n", "We can see the data is well formed and what we expect. We will now use this data to download data related to a specific point and for the Landsat 8. The following script does a simple filter." ] }, { "cell_type": "code", - "execution_count": 21, + "execution_count": 16, "id": "c5e300c3-e1f3-4cd4-9679-77725e61c4db", "metadata": {}, "outputs": [ @@ -619,7 +632,7 @@ }, { "cell_type": "code", - "execution_count": 22, + "execution_count": 17, "id": "c9872510-4265-4b0e-aeb5-5a829ff69b24", "metadata": {}, "outputs": [ @@ -649,7 +662,7 @@ }, { "cell_type": "code", - "execution_count": 23, + "execution_count": 18, "id": "6912a9ec-0f9b-4500-ba20-d4280592b323", "metadata": {}, "outputs": [ @@ -673,14 +686,12 @@ "id": "a76f24f8-3b2d-4c0d-880b-f2911b9d9b84", "metadata": {}, "source": [ - "## Download the Data\n", - "\n", "Now that we have a list of folders we are interested, we will now download them with a simple script that takes bucket addresses (URL's) and downloads them with the `gsutil` program." 
] }, { "cell_type": "code", - "execution_count": 24, + "execution_count": 19, "id": "3572c518-df83-4906-bfa6-a37bde2a5063", "metadata": {}, "outputs": [ @@ -715,7 +726,7 @@ }, { "cell_type": "code", - "execution_count": 25, + "execution_count": 20, "id": "cccec3e1-0dcd-4e3b-a059-a884f5219b66", "metadata": { "scrolled": true, @@ -729,16 +740,16 @@ "+++ gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_ANG.txt...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B1.TIF...\n", - "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B10.TIF...\n", - "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B3.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B11.TIF...\n", - "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B4.TIF...\n", + "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B10.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B2.TIF...\n", - "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B6.TIF...\n", + "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B3.TIF...\n", "Copying 
gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B5.TIF...\n", + "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B6.TIF...\n", + "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B4.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B7.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B8.TIF...\n", - "==> NOTE: You are downloading one or more large file(s), which would\n", + "==> NOTE: You are downloading one or more large file(s), which would \n", "run significantly faster if you enabled sliced object downloads. This\n", "feature is enabled by default but requires that compiled crcmod be\n", "installed (see \"gsutil help crcmod\").\n", @@ -746,19 +757,19 @@ "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_B9.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_BQA.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20201007_20201016_01_T1/LC08_L1TP_025033_20201007_20201016_01_T1_MTL.txt...\n", - "| [14/14 files][952.9 MiB/952.9 MiB] 100% Done 18.6 MiB/s ETA 00:00:00 \n", + "\\ [14/14 files][952.9 MiB/952.9 MiB] 100% Done \n", "Operation completed over 14 objects/952.9 MiB. 
\n", "+++ gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1\n", - "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_ANG.txt...\n", - "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B3.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B1.TIF...\n", + "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B10.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B4.TIF...\n", + "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B3.TIF...\n", + "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_ANG.txt...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B11.TIF...\n", - "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B10.TIF...\n", - "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B2.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B5.TIF...\n", - "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B7.TIF...\n", "Copying 
gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B6.TIF...\n", + "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B7.TIF...\n", + "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B2.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B8.TIF...\n", "==> NOTE: You are downloading one or more large file(s), which would \n", "run significantly faster if you enabled sliced object downloads. This\n", @@ -768,7 +779,7 @@ "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_B9.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_BQA.TIF...\n", "Copying gs://gcp-public-data-landsat/LC08/01/025/033/LC08_L1TP_025033_20210519_20210528_01_T1/LC08_L1TP_025033_20210519_20210528_01_T1_MTL.txt...\n", - "- [14/14 files][ 1.0 GiB/ 1.0 GiB] 100% Done 19.1 MiB/s ETA 00:00:00 \n", + "| [14/14 files][ 1.0 GiB/ 1.0 GiB] 100% Done 37.1 MiB/s ETA 00:00:00 \n", "Operation completed over 14 objects/1.0 GiB. 
\n" ] } @@ -787,7 +798,7 @@ }, { "cell_type": "code", - "execution_count": 26, + "execution_count": 21, "id": "a37c1567-14b5-4dc7-bc27-d1b84411fce1", "metadata": {}, "outputs": [ @@ -796,9 +807,9 @@ "output_type": "stream", "text": [ "total 2564796\n", - "drwxr-xr-x 2 learner learner 4096 Feb 7 16:07 \u001b[0m\u001b[01;34mLC08_L1TP_025033_20201007_20201016_01_T1\u001b[0m\n", - "drwxr-xr-x 2 learner learner 4096 Feb 7 16:08 \u001b[01;34mLC08_L1TP_025033_20210519_20210528_01_T1\u001b[0m\n", - "-rw-r--r-- 1 learner learner 2626336574 Feb 7 16:04 index.csv\n" + "drwxr-xr-x 2 learner learner 4096 Feb 14 20:33 \u001b[0m\u001b[01;34mLC08_L1TP_025033_20201007_20201016_01_T1\u001b[0m\n", + "drwxr-xr-x 2 learner learner 4096 Feb 14 20:33 \u001b[01;34mLC08_L1TP_025033_20210519_20210528_01_T1\u001b[0m\n", + "-rw-r--r-- 1 learner learner 2626336574 Feb 14 20:32 index.csv\n" ] } ], @@ -806,6 +817,21 @@ "ls -l data" ] }, + { + "cell_type": "markdown", + "id": "0ca2e2ff-8276-4092-8a9e-ca754db5078e", + "metadata": {}, + "source": [ + "````{admonition} Tip\n", + ":class: tip\n", + "To run the above analysis in one step run\n", + "\n", + "```\n", + "bash get-data.sh\n", + "```\n", + "````" + ] + }, { "cell_type": "markdown", "id": "6073d5f6-68ef-41df-8044-73f221ce8780", @@ -818,7 +844,7 @@ }, { "cell_type": "code", - "execution_count": 27, + "execution_count": 22, "id": "0c027e92-ae6f-4152-b8d6-5a70172de3e2", "metadata": {}, "outputs": [ @@ -852,9 +878,35 @@ "cat process_sat.py" ] }, + { + "cell_type": "markdown", + "id": "b8ba76ef-f0af-413a-99eb-33697ec87238", + "metadata": {}, + "source": [ + "This code writes to the output folder, so let's create it" + ] + }, { "cell_type": "code", - "execution_count": 28, + "execution_count": 25, + "id": "0d6f55c9-3e2c-47d7-b4d0-b984fedc110e", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "mkdir: created directory 'output/'\n" + ] + } + ], + "source": [ + "mkdir -v output/" + ] + }, + { + 
"cell_type": "code", + "execution_count": 23, "id": "77999a80-7cfd-46d3-86a6-05d199f7e66d", "metadata": {}, "outputs": [ @@ -885,7 +937,7 @@ }, { "cell_type": "code", - "execution_count": 29, + "execution_count": 24, "id": "b5a2b29b-9c1d-4376-a3eb-4a3dc4bac160", "metadata": { "scrolled": true, @@ -896,7 +948,6 @@ "name": "stdout", "output_type": "stream", "text": [ - "mkdir: created directory 'output/'\n", "Reading package lists... Done\n", "Building dependency tree \n", "Reading state information... Done\n", @@ -942,8 +993,8 @@ "0 upgraded, 79 newly installed, 0 to remove and 2 not upgraded.\n", "Need to get 46.8 MB of archives.\n", "After this operation, 172 MB of additional disk space will be used.\n", - "Get:1 http://deb.debian.org/debian buster/main amd64 poppler-data all 0.4.9-2 [1473 kB]\n", - "Get:2 http://security.debian.org/debian-security buster/updates/main amd64 libicu63 amd64 63.1-6+deb10u2 [8300 kB]\n", + "Get:1 http://security.debian.org/debian-security buster/updates/main amd64 libicu63 amd64 63.1-6+deb10u2 [8300 kB]\n", + "Get:2 http://deb.debian.org/debian buster/main amd64 poppler-data all 0.4.9-2 [1473 kB]\n", "Get:3 http://deb.debian.org/debian buster/main amd64 fonts-dejavu-core all 2.37-1 [1068 kB]\n", "Get:4 http://deb.debian.org/debian buster/main amd64 fontconfig-config all 2.13.1-2 [280 kB]\n", "Get:5 http://deb.debian.org/debian buster/main amd64 gdal-data all 2.4.0+dfsg-1 [744 kB]\n", @@ -951,26 +1002,26 @@ "Get:7 http://deb.debian.org/debian buster/main amd64 libgfortran5 amd64 8.3.0-6 [581 kB]\n", "Get:8 http://deb.debian.org/debian buster/main amd64 libblas3 amd64 3.8.0-2 [148 kB]\n", "Get:9 http://deb.debian.org/debian buster/main amd64 liblapack3 amd64 3.8.0-2 [2110 kB]\n", - "Get:10 http://security.debian.org/debian-security buster/updates/main amd64 libtiff5 amd64 4.1.0+git191117-2~deb10u3 [271 kB]\n", - "Get:11 http://security.debian.org/debian-security buster/updates/main amd64 liburiparser1 amd64 0.9.1-1+deb10u1 [48.1 kB]\n", 
- "Get:12 http://security.debian.org/debian-security buster/updates/main amd64 libnss3 amd64 2:3.42.1-1+deb10u5 [1160 kB]\n", - "Get:13 http://security.debian.org/debian-security buster/updates/main amd64 libpq5 amd64 11.14-0+deb10u1 [171 kB]\n", - "Get:14 http://security.debian.org/debian-security buster/updates/main amd64 python3-lxml amd64 4.3.2-1+deb10u4 [1163 kB]\n", - "Get:15 http://deb.debian.org/debian buster/main amd64 libarpack2 amd64 3.7.0-2 [102 kB]\n", - "Get:16 http://deb.debian.org/debian buster/main amd64 libsuperlu5 amd64 5.2.1+dfsg1-4 [161 kB]\n", - "Get:17 http://deb.debian.org/debian buster/main amd64 libarmadillo9 amd64 1:9.200.7+dfsg-1 [88.6 kB]\n", - "Get:18 http://deb.debian.org/debian buster/main amd64 libcharls2 amd64 2.0.0+dfsg-1 [64.3 kB]\n", - "Get:19 http://deb.debian.org/debian buster/main amd64 libxml2 amd64 2.9.4+dfsg1-7+deb10u2 [689 kB]\n", - "Get:20 http://deb.debian.org/debian buster/main amd64 libdap25 amd64 3.20.3-1 [557 kB]\n", - "Get:21 http://deb.debian.org/debian buster/main amd64 libdapclient6v5 amd64 3.20.3-1 [202 kB]\n", - "Get:22 http://deb.debian.org/debian buster/main amd64 libdapserver7v5 amd64 3.20.3-1 [131 kB]\n", - "Get:23 http://deb.debian.org/debian buster/main amd64 libepsilon1 amd64 0.9.2+dfsg-4 [42.0 kB]\n", - "Get:24 http://deb.debian.org/debian buster/main amd64 libfontconfig1 amd64 2.13.1-2 [346 kB]\n", - "Get:25 http://deb.debian.org/debian buster/main amd64 libfreexl1 amd64 1.0.5-3 [34.1 kB]\n", - "Get:26 http://deb.debian.org/debian buster/main amd64 libfyba0 amd64 4.1.1-6 [114 kB]\n", - "Get:27 http://deb.debian.org/debian buster/main amd64 libgeos-3.7.1 amd64 3.7.1-1 [735 kB]\n", - "Get:28 http://deb.debian.org/debian buster/main amd64 libgeos-c1v5 amd64 3.7.1-1 [299 kB]\n", - "Get:29 http://deb.debian.org/debian buster/main amd64 proj-data all 5.2.0-1 [6986 kB]\n", + "Get:10 http://deb.debian.org/debian buster/main amd64 libarpack2 amd64 3.7.0-2 [102 kB]\n", + "Get:11 http://deb.debian.org/debian 
buster/main amd64 libsuperlu5 amd64 5.2.1+dfsg1-4 [161 kB]\n", + "Get:12 http://deb.debian.org/debian buster/main amd64 libarmadillo9 amd64 1:9.200.7+dfsg-1 [88.6 kB]\n", + "Get:13 http://deb.debian.org/debian buster/main amd64 libcharls2 amd64 2.0.0+dfsg-1 [64.3 kB]\n", + "Get:14 http://deb.debian.org/debian buster/main amd64 libxml2 amd64 2.9.4+dfsg1-7+deb10u2 [689 kB]\n", + "Get:15 http://deb.debian.org/debian buster/main amd64 libdap25 amd64 3.20.3-1 [557 kB]\n", + "Get:16 http://deb.debian.org/debian buster/main amd64 libdapclient6v5 amd64 3.20.3-1 [202 kB]\n", + "Get:17 http://deb.debian.org/debian buster/main amd64 libdapserver7v5 amd64 3.20.3-1 [131 kB]\n", + "Get:18 http://deb.debian.org/debian buster/main amd64 libepsilon1 amd64 0.9.2+dfsg-4 [42.0 kB]\n", + "Get:19 http://deb.debian.org/debian buster/main amd64 libfontconfig1 amd64 2.13.1-2 [346 kB]\n", + "Get:20 http://deb.debian.org/debian buster/main amd64 libfreexl1 amd64 1.0.5-3 [34.1 kB]\n", + "Get:21 http://deb.debian.org/debian buster/main amd64 libfyba0 amd64 4.1.1-6 [114 kB]\n", + "Get:22 http://security.debian.org/debian-security buster/updates/main amd64 libtiff5 amd64 4.1.0+git191117-2~deb10u3 [271 kB]\n", + "Get:23 http://deb.debian.org/debian buster/main amd64 libgeos-3.7.1 amd64 3.7.1-1 [735 kB]\n", + "Get:24 http://security.debian.org/debian-security buster/updates/main amd64 liburiparser1 amd64 0.9.1-1+deb10u1 [48.1 kB]\n", + "Get:25 http://deb.debian.org/debian buster/main amd64 libgeos-c1v5 amd64 3.7.1-1 [299 kB]\n", + "Get:26 http://security.debian.org/debian-security buster/updates/main amd64 libnss3 amd64 2:3.42.1-1+deb10u5 [1160 kB]\n", + "Get:27 http://deb.debian.org/debian buster/main amd64 proj-data all 5.2.0-1 [6986 kB]\n", + "Get:28 http://security.debian.org/debian-security buster/updates/main amd64 libpq5 amd64 11.14-0+deb10u1 [171 kB]\n", + "Get:29 http://security.debian.org/debian-security buster/updates/main amd64 python3-lxml amd64 4.3.2-1+deb10u4 [1163 kB]\n", "Get:30 
http://deb.debian.org/debian buster/main amd64 libproj13 amd64 5.2.0-1 [225 kB]\n", "Get:31 http://deb.debian.org/debian buster/main amd64 libjbig0 amd64 2.1-3.1+b2 [31.0 kB]\n", "Get:32 http://deb.debian.org/debian buster/main amd64 libjpeg62-turbo amd64 1:1.5.2-2+deb10u1 [133 kB]\n", @@ -1021,11 +1072,11 @@ "Get:77 http://deb.debian.org/debian buster/main amd64 python3-pyparsing all 2.2.0+dfsg1-2 [89.6 kB]\n", "Get:78 http://deb.debian.org/debian buster/main amd64 python3-snuggs all 1.4.3-1 [7228 B]\n", "Get:79 http://deb.debian.org/debian buster/main amd64 python3-rasterio amd64 1.0.21-1 [818 kB]\n", - "Fetched 46.8 MB in 5s (8830 kB/s) \n", + "Fetched 46.8 MB in 1s (79.1 MB/s) \n", "Extracting templates from packages: 100%\n", "Preconfiguring packages ...\n", "Selecting previously unselected package poppler-data.\n", - "(Reading database ... 57683 files and directories currently installed.)\n", + "(Reading database ... 57852 files and directories currently installed.)\n", "Preparing to unpack .../00-poppler-data_0.4.9-2_all.deb ...\n", "Unpacking poppler-data (0.4.9-2) ...\n", "Selecting previously unselected package fonts-dejavu-core.\n", @@ -1351,13 +1402,12 @@ } ], "source": [ - "mkdir -v output/\n", "sudo apt-get install python3-rasterio --yes" ] }, { "cell_type": "code", - "execution_count": 31, + "execution_count": 26, "id": "b9e367a0-26ce-42ce-bb04-6a432f41876e", "metadata": {}, "outputs": [ @@ -1373,7 +1423,7 @@ } ], "source": [ - "/usr/bin/python3 process_sat.py" + "python3 process_sat.py" ] }, { @@ -1394,7 +1444,7 @@ }, { "cell_type": "code", - "execution_count": 32, + "execution_count": 27, "id": "db9f26aa-6317-4834-8bf1-972c8b3cc032", "metadata": {}, "outputs": [ @@ -1403,8 +1453,8 @@ "output_type": "stream", "text": [ "total 192M\n", - "-rw-r--r-- 1 learner learner 192M Feb 7 16:09 \u001b[0m\u001b[01;35mresult-LC08_L1TP_025033_20201007_20201016_01_T1.png\u001b[0m\n", - "-rw-r--r-- 1 learner learner 910 Feb 7 16:09 
result-LC08_L1TP_025033_20201007_20201016_01_T1.png.aux.xml\n" + "-rw-r--r-- 1 learner learner 192M Feb 14 20:36 \u001b[0m\u001b[01;35mresult-LC08_L1TP_025033_20201007_20201016_01_T1.png\u001b[0m\n", + "-rw-r--r-- 1 learner learner 910 Feb 14 20:36 result-LC08_L1TP_025033_20201007_20201016_01_T1.png.aux.xml\n" ] } ], @@ -1424,7 +1474,7 @@ }, { "cell_type": "code", - "execution_count": 33, + "execution_count": 28, "id": "9345472f-4ef3-490b-a80e-2462cd534c89", "metadata": {}, "outputs": [ @@ -1432,7 +1482,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "essentials-learner-2022-02-07\n" + "essentials-learner-2022-02-14\n" ] } ], @@ -1442,7 +1492,7 @@ }, { "cell_type": "code", - "execution_count": 34, + "execution_count": 29, "id": "27dfae96-faf2-4d5d-8a78-97781841f172", "metadata": {}, "outputs": [ @@ -1450,7 +1500,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "gs://essentials-learner-2022-02-07/\n" + "gs://essentials-learner-2022-02-14/\n" ] } ], @@ -1470,7 +1520,7 @@ }, { "cell_type": "code", - "execution_count": 35, + "execution_count": 30, "id": "681e6b1d-98bb-448a-a57e-f5674214effd", "metadata": {}, "outputs": [ @@ -1498,7 +1548,7 @@ } ], "source": [ - "gsutil -m cp -r \"output\" \"gs://$BUCKET\"" + "gsutil -m cp -r output gs://$BUCKET" ] }, { @@ -1511,7 +1561,7 @@ }, { "cell_type": "code", - "execution_count": 36, + "execution_count": 31, "id": "248c47be-625f-44f5-a6b6-919e8d8baafd", "metadata": {}, "outputs": [ @@ -1519,17 +1569,17 @@ "name": "stdout", "output_type": "stream", "text": [ - "gs://essentials-learner-2022-02-07/output/\n" + "gs://essentials-learner-2022-02-14/output/\n" ] } ], "source": [ - "gsutil ls \"gs://$BUCKET\"" + "gsutil ls gs://$BUCKET" ] }, { "cell_type": "code", - "execution_count": 37, + "execution_count": 32, "id": "b1ea18e9-5861-4479-9948-3303952dee8a", "metadata": {}, "outputs": [ @@ -1537,14 +1587,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "191.58 MiB 2022-02-07T16:09:41Z 
gs://essentials-learner-2022-02-07/output/result-LC08_L1TP_025033_20201007_20201016_01_T1.png\n", - " 910 B 2022-02-07T16:09:39Z gs://essentials-learner-2022-02-07/output/result-LC08_L1TP_025033_20201007_20201016_01_T1.png.aux.xml\n", + "191.58 MiB 2022-02-14T20:36:53Z gs://essentials-learner-2022-02-14/output/result-LC08_L1TP_025033_20201007_20201016_01_T1.png\n", + " 910 B 2022-02-14T20:36:51Z gs://essentials-learner-2022-02-14/output/result-LC08_L1TP_025033_20201007_20201016_01_T1.png.aux.xml\n", "TOTAL: 2 objects, 200890195 bytes (191.58 MiB)\n" ] } ], "source": [ - "gsutil ls -lh \"gs://$BUCKET/output\"" + "gsutil ls -lh gs://$BUCKET/output" ] }, { diff --git a/content/GCP/06_sharing_results.ipynb b/content/GCP/06_sharing_results.ipynb index 4537754..edc6e0b 100644 --- a/content/GCP/06_sharing_results.ipynb +++ b/content/GCP/06_sharing_results.ipynb @@ -23,6 +23,41 @@ "```" ] }, + { + "cell_type": "markdown", + "id": "c08aca46-0c53-4d44-aded-74a41eb38fdf", + "metadata": { + "tags": [] + }, + "source": [ + "## Security\n", + "\n", + "Everything in the cloud requires permission (authorization). Let's first verify that we have the permissions to create a bucket. A Bucket (a resource) is created within a project and inherits permissions from it.\n", + "\n", + "We are interested in what permissions *your* account has for *your* project. To do this, navigate to the IAM page (**Navigation Menu -> IAM & Admin -> IAM -> Permissions -> View By: Principals**). This shows the permissions for the project.\n", + "\n", + "*Note: There is a powerful filter box to limit the permissions shown.*\n", + "\n", + "You should see a row with your account shown in the Principal column. Here you should see the \"Editor\" Role in the Role column. A *role* is a collection of permissions managed by Google or someone else. 
The **Editor**, **Owner**, or the **Storage Admin** role for a project will *allow* *you* to create, access, and delete Buckets *in* the project.\n", + "\n", + "There are three important pieces of information that work together to form the **IAM policy**: the permission (role), the identity (principal), and the resource (project). This is another who (identity), what (permission), and where (resource)." + ] + }, + { + "cell_type": "markdown", + "id": "c37b2071-b51a-42fd-acdf-e05a53e7fc8e", + "metadata": { + "tags": [] + }, + "source": [ + "```{admonition} Exercise\n", + "\n", + "Answer the following questions:\n", + " * What is the \"Who, What, Where\" of the IAM policy that allows you to use your project?\n", + " * What else has permissions to do things in your project? State its \"Who, What, Where\".\n", + "```" + ] + }, { "cell_type": "markdown", "id": "a1c11268-f389-405f-8d96-3c319a49b882", @@ -46,8 +81,8 @@ "source": [ "We will now add Members to a Bucket using the Web Console. We will use the Web Console to interactively build the policy binding by doing the following:\n", " * Navigation Menu -> **Storage/Cloud Storage** -> Browser -> Click on the Bucket Name (**Bucket Details**) -> Select the **Permissions** tab -> Click **Add** next to \"Permissions\" above the permissions list.\n", - " * In the \"New Principals\" box add the Identity for the collaborator (another individual) as directed by the instructor.\n", - " * Select the \"Storage Object Viewer\" by typing \"Storage Object Viewer\" in the filter and then selecting \"Storage Object Viewer\". Do not use any \"Legacy Storage\" roles.\n", + " * In the \"**New Principals**\" box add the Identity for the collaborator (another individual) as directed by the instructor.\n", + " * Select the \"**Storage Object Viewer**\" by typing \"Storage Object Viewer\" in the filter and then selecting \"Storage Object Viewer\". 
Do not use any \"Legacy Storage\" roles.\n", " * Click \"Save\" to save the policy.\n", " ![iam-storage-object-viewer](img/iam-storage-object-viewer.png)\n", " * Verify the policy is listed in the \"Permissions\" table on the \"Bucket Details\" page (you should now be on this page).\n", diff --git a/content/GCP/07_monitoring_costs.ipynb b/content/GCP/07_monitoring_costs.ipynb index 920bb05..781ba4e 100644 --- a/content/GCP/07_monitoring_costs.ipynb +++ b/content/GCP/07_monitoring_costs.ipynb @@ -10,7 +10,7 @@ "```{admonition} Overview\n", ":class: tip\n", "\n", - "**Teaching:** 30 min\n", + "**Teaching:** 10 min\n", "\n", "**Exercises:** none\n", "\n", diff --git a/content/GCP/08_cleaning_up_resources.ipynb b/content/GCP/08_cleaning_up_resources.ipynb index 3880c63..f4b7bc1 100644 --- a/content/GCP/08_cleaning_up_resources.ipynb +++ b/content/GCP/08_cleaning_up_resources.ipynb @@ -43,7 +43,7 @@ " * Try to delete the **VM Instance** on your own\n", "```\n", "\n", - "Navigate to the **Compute Engine** service and in the **VM Instances** page and remove the *VM Instance*.\n" + "Navigate to the **Compute Engine** service and in the **VM Instances** page and remove the *essentials* *VM Instance*.\n" ] }, { @@ -61,7 +61,7 @@ "\n", "```{admonition} Exercise\n", "\n", - " * Try to delete the storage **bucket* on your own\n", + " * Try to delete the storage **bucket** on your own\n", "```\n", "\n", "Navigate to the **Cloud Storage** service and in the **browser** page and remove the *essentials* Bucket." @@ -87,8 +87,8 @@ "\n", "To verify that the resources have been deleted do the following:\n", " * Navigate to the **VM Instances** page and **Cloud Storage** **Browser** to verify the resources have been deleted.\n", - " * Navigate to the **IAM & Admin** service and in the **Asset Inventory** page to verify that the resources have been deleted.\n", - " * Navigate to the **Activity** page and verify the deletion events. 
**Note:** the information collected in the asset inventory is often delayed.\n", + " * Navigate to the **IAM & Admin** service and in the **Asset Inventory** page to verify that the resources have been deleted. **Note:** the information collected in the asset inventory is often delayed.\n", + " * Navigate to the **Activity** page and verify the deletion events. \n", "\n", "```{admonition} Tip\n", ":class: tip\n", diff --git a/scripts/azure-create.sh b/scripts/azure-create.sh index fcbd3e5..fa1e16a 100755 --- a/scripts/azure-create.sh +++ b/scripts/azure-create.sh @@ -21,33 +21,40 @@ echo "+++ creating resource group $RESOURCE_GROUP $SUBSCRIPTION" az group create --resource-group $RESOURCE_GROUP --location $LOCATION echo "+++ creating VM $VM" -# Ubuntu is "Canonical:0001-com-ubuntu-server-focal:20_04-lts:latest" +# Ubuntu is "Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest" # Debian is "Debian:debian-10:10:latest" # Resource Group scope "subscriptions/$SUBSCRIPTION/resourceGroups/$RESOURCE_GROUP" az vm create --resource-group $RESOURCE_GROUP --name $VM \ - --image Canonical:0001-com-ubuntu-server-focal:20_04-lts:latest \ + --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest \ --size Standard_D2_v4 \ - --storage-sku Standard_LRS \ + --storage-sku StandardSSD_LRS \ --public-ip-sku Standard \ --assign-identity \ --scope "subscriptions/$SUBSCRIPTION/resourceGroups/$RESOURCE_GROUP" \ + --role Contributor \ --admin-username $NAME +echo "+++ get IP for $VM" IP=$(az vm show --name $VM --resource-group $RESOURCE_GROUP -d --query publicIps -otsv) -IDENTITY=$(az vm show --name $VM --resource-group $RESOURCE_GROUP --query identity.principalId -otsv) -echo "+++ assign the VM the Contributor role to the subscription ($IDENTITY to $SUBSCRIPTION)" -az role assignment create --assignee $IDENTITY --scope /subscriptions/$SUBSCRIPTION --role Contributor +echo "+++ configure ~/.ssh/$VM.config" +cat > ~/.ssh/$VM.config < ~/.ssh/$VM.config <