Setting up Auto Scaling in AWS takes a few steps. It is commonly used to dynamically change the number of Amazon EC2 instances in a group to match shifting workloads. Here is a step-by-step tutorial for setting up Auto Scaling on AWS:
1 Log in to the AWS Console: Use your AWS account credentials to access the AWS Management Console.
2 Select or Build an Amazon Machine Image (AMI): The configuration of the EC2 instances you want to launch can come from an existing AMI or from a custom one that you create.
3 Create a Launch Template: 1 Go to the EC2 Dashboard.
2 Select “Launch Templates” in the left navigation pane.
3 Click the “Create launch template” button.
Launch template name = highsky_template
Template version description = template_highsky
4 Choose the AMI that you want.
Click = My AMIs, then choose the Amazon Machine Image (AMI) [ Image Name ] = auto_image
5 Set up the instance type, key pair, security groups, and, if necessary, any user data scripts.
Choose the instance type = t2.micro
Choose your Key = – – – – – –
Choose your Network Settings
6 After reviewing the settings, click “Create launch template”.
Create launch template
4 Create an Auto Scaling Group:
1 Go to the EC2 Dashboard and select “Auto Scaling Groups” from the left navigation pane after creating the launch template.
2 Click the “Create Auto Scaling group” button.
Auto Scaling group name = Auto_scaling_group
Launch template = highsky_template
Click = Next
3 Select the launch template you created in the previous step.
VPC = ( Default VPC )
Availability Zones and Subnets = ( your choice )
Click = Next
4 Configure advanced options – optional: [ Choose a load balancer to distribute incoming traffic for your application across instances to make it more reliable and easily scalable. You can also set options that give you more control over health check replacements and monitoring. ]
Choose = No load balancer
5 Health checks [ Health checks increase availability by replacing unhealthy instances. When you use multiple health checks, all are evaluated, and if at least one fails, instance replacement occurs.]
Health check grace period = 180 seconds
Click = Next
6 Set the group’s desired capacity, minimum, and maximum instance counts.
Desired capacity = 1
Minimum capacity = 1
Maximum capacity = 2
Click = Next
6 Set Up Notifications (Optional):
Notifications can be set up to notify you of scaling events. Email, SMS, and other destinations can receive these updates via Amazon SNS (Simple Notification Service).
Click = Next
7 Test Auto Scaling:
1 Manually start scaling events by simulating traffic or load spikes to make sure your system behaves as you anticipate.
2 Watch how the Auto Scaling group changes the number of instances it has based on the policies you’ve set.
Click = Next
8 Monitoring and upkeep:
1 Keep a close eye on the performance of your Auto Scaling group and modify scaling rules as necessary to meet your application’s needs.
2 Monitor the health of your instances and replace any unhealthy instances promptly.
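If you prefer the command line, you can also check the group’s current state with the AWS CLI (a quick sketch, assuming the AWS CLI is configured and the group name used above):
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names Auto_scaling_group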
To use Terraform to construct an Amazon S3 bucket, you must define an appropriate resource block in your Terraform setup. Here’s a step-by-step tutorial on creating an S3 bucket with Terraform:
1 Configure AWS Credentials:
Before you continue, make sure you have your AWS credentials set up. You can use the AWS CLI aws configure command or specify them as environment variables.
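For reference, aws configure prompts for four values; the key values shown here are placeholders, and the region is only an example:
aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: ap-south-1
Default output format [None]: json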
2 Follow these steps to create a Terraform configuration:
Create a .tf file (for example, main.tf) to define your Terraform setup.
3 Define the S3 Bucket: Add the following Terraform code to your main.tf file to define an S3 bucket resource:
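A minimal resource block might look like the following (the provider block and region are illustrative; adjust them to your setup):
provider "aws" {
  region = "ap-south-1"
}

resource "aws_s3_bucket" "highskybucket" {
  # Bucket names must be globally unique across AWS
  bucket = "highskybucket"
}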
Replace “highskybucket” with a unique name for your S3 bucket. Bucket names must be globally unique across AWS.
4 Initialize Terraform:
To initialize Terraform, browse to the directory containing your Terraform configuration file and run the following command:
terraform init
5 Plan the Configuration: In HashiCorp Terraform, the terraform plan command generates an execution plan outlining the changes Terraform will make to your infrastructure based on your current configuration. Without actually making the changes, it shows you what actions Terraform will take to create, update, or destroy resources, so you can review and confirm the changes before applying them to your infrastructure.
terraform plan
6 Apply the Configuration: Run the following command to create the S3 bucket:
terraform apply
7 Review and Confirm: Terraform will display a plan of what it intends to create. After reviewing the plan, type yes to confirm and create the S3 bucket.
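If you later want to remove the bucket (and anything else defined in this configuration), you can run terraform destroy and confirm with yes:
terraform destroy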
Launch a Windows Server EC2 instance
1 Once you have logged in, navigate to the EC2 service. Either use the top search bar to look for “EC2” or browse the “Compute” section to find it.
2 Start a new instance: To begin building a new EC2 instance, click the “Launch Instance” button.
3 You will be asked to select an Amazon Machine Image (AMI) during the instance launch wizard. To find the Windows Server AMI of your choosing, select the “AWS Marketplace” tab and conduct a search. There are several versions offered, including Windows Server 2019, 2016, and others. By clicking the “Select” button, you can choose the AMI that best meets your needs.
( 1 ) Instance name = highsky-windows-server1
( 2 ) Application and OS Images (Amazon Machine Image) = Windows ( Microsoft )
4. Select a type of instance: Depending on the resources and performance you require, choose the instance type. You have the option of selecting a general-purpose instance or a particular instance type. After making your choice, pick “Next: Configure Instance Details” from the menu.
5. Configure instance details: In this section you can set the number of instances, network configuration, storage, security groups, and more. Adjust the settings as necessary, then click the “Next: Add Storage” option.
6. Add storage by setting up your instance’s storage options. If necessary, you can change the default storage size or add more volumes. After making your modifications, select “Next: Add Tags” from the menu.
7. You can optionally add tags to your instance to improve management and organisation. Key-value pairs called tags are a way of identifying and classifying resources. When you’re ready, press the “Next: Configure Security Group” button.
8. Setting up a security group will allow you to manage the inbound and outgoing traffic to your instance. A new security group can be made, or you can choose an existing one. For your Windows Server instance, make sure to enable inbound traffic that is required, such as Remote Desktop Protocol (RDP) for remote access. When you’re done, press the “Review and Launch” button.
9. Review everything you’ve configured for your instance before launching it. Click the “Launch” button if everything appears to be in order.
10. Choose or create a key pair: If you don’t already have a key pair, you’ll be prompted to create one. Key pairs are used for secure login to your instance. Save the downloaded private key file (.pem) in a secure location. After downloading the key pair, select “Launch Instances”.
11. Launch status: A notification confirming that your instances are launching will appear. To access the EC2 dashboard, click the “View Instances” option.
12. Connect to your instance: once it is running, select it from the list and click the “Connect” button. Follow the recommended steps to connect to your Windows Server instance using Remote Desktop Protocol (RDP).
That’s it! You have successfully created a Windows Server EC2 instance in AWS. You can now configure it and install the applications you need.
Connect to your Windows instance using RDP
1. To log in to your Windows instance using Remote Desktop, you must first retrieve the initial administrator password and then enter it. After the instance launches, it takes some time before this password becomes available.
2. The name of the administrator account depends on the operating system’s language. For example, it is Administrator in English, Administrateur in French, and Administrador in Portuguese. See the Microsoft TechNet Wiki for more details.
3. If you’ve joined your instance to a domain, you can access it using the domain credentials you’ve defined in AWS Directory Service. On the Remote Desktop login screen, use the administrator’s fully qualified user name and the domain password instead of the local computer name and the generated password.
4. When the instance was launched, you created a private key (.pem) file; select Browse and go to that location. To copy the whole contents of the file to this window, choose the file and then select Open.
5. Choose Decrypt Password. The console displays the instance’s default administrator password under Password, replacing the Get password link shown previously. Save the password in a secure location; you must enter it to connect to the instance.
6. Choose Download remote desktop file. Your browser prompts you to open or save the RDP shortcut file. When you have finished downloading the file, select Cancel to return to the Instances page.
( 1 ) You would see the Remote Desktop Connection dialogue box if you opened the RDP file.
( 2 ) If you saved the RDP file, open it by going to your downloads directory and clicking it to bring up a dialogue box.
7. You may be warned that the publisher of the remote connection is unknown. Select Connect to continue connecting to your instance.
8. The administrator account is selected by default. Copy and paste the password you saved earlier.
9. You might see a warning that the security certificate could not be validated; this is expected with self-signed certificates. Use the next several steps to verify the remote computer’s identity, or simply select Yes (Windows) or Continue (Mac OS X) if you trust the certificate.
Object Ownership
Bucket owner enforced – Bucket and object ACLs are disabled, and you, as the bucket owner, automatically own and have full control over every object in the bucket. Access control for your bucket and the objects in it is based on policies such as AWS Identity and Access Management (IAM) user policies and S3 bucket policies. Objects can be uploaded to your bucket only if they don’t specify an ACL or if they use the bucket-owner-full-control canned ACL.
Block Public Access settings for this bucket
Public access is granted to buckets and objects through access control lists (ACLs), bucket policies, access point policies, or all. In order to ensure that public access to this bucket and its objects is blocked, turn on Block all public access. These settings apply only to this bucket and its access points. AWS recommends that you turn on Block all public access, but before applying any of these settings, ensure that your applications will work correctly without public access. If you require some level of public access to this bucket or objects within, you can customize the individual settings below to suit your specific storage use cases
Bucket Versioning
Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.
Disable
( choose Disable )
Default encryption
The default encryption configuration of an S3 bucket is always enabled and is at a minimum set to server-side encryption with Amazon S3-managed keys (SSE-S3). With server-side encryption, Amazon S3 encrypts an object before saving it to disk and decrypts it when you download the object. Encryption doesn’t change the way that you access data as an authorized user. It only further protects your data. You can configure default encryption for a bucket. You can use either server-side encryption with Amazon S3 managed keys (SSE-S3) (the default) or server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).
Encryption key type
Amazon S3 managed keys (SSE-S3)
AWS Key Management Service key (SSE-KMS)
( Choose the Amazon S3 managed keys (SSE-S3) )
Bucket Key = Enable
Step 7: Click on Create Bucket.
If the bucket is created successfully, you will see a message like this on the top of the page:
Repeat the same steps to create a second bucket: 2 ( highsky2 )
2 Setting up IAM (Identity and Access Management) in AWS (Amazon Web Services) can be done by following these steps:
( 1 ) Create a Policy ( 2 ) Create a User
1 Create Policy
1. Go to the IAM service by searching for it in the search bar or selecting it from the list of services.
2. Once in the IAM console, click on the “Policies” tab in the left-hand menu.
3. Click the “Create policy” button.
4. Choose either the “Visual editor” or the “JSON” tab to create the policy.
5. If you choose the Visual editor tab, select the service the policy will apply to and then choose the actions and resources the policy will allow or deny. To use the JSON tab instead:
( 1 ) Select the “JSON” tab.
( 2 ) Define the policy document using the JSON syntax. The policy document specifies the permissions and resources that the policy grants or denies.
( 3 ) Make sure to include the necessary actions, resources, and conditions according to your requirements.
( 4 ) Click on the “Review policy” button.
( 5 ) Provide a name and optional description for your policy.
( 6 ) Review the policy details and click on the “Create policy” button to finalize it.
Once the policy is created, you can attach it to a user, group, or role in IAM. When the user, group, or role tries to access a resource, the policy will be checked to determine whether the action is allowed or denied.
It’s important to test your policy to ensure that it provides the intended access and restrictions. You can do this by using the policy simulator in the IAM console, which lets you simulate a policy to see how it would apply in different scenarios.
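As a rough illustration, a policy document that gives full access to the highsky1 bucket used later in this walkthrough might look like this (the bucket name and the broad s3:* action are examples; narrow them to your requirements):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::highsky1",
        "arn:aws:s3:::highsky1/*"
      ]
    }
  ]
}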
2 Create User
1. Once in the IAM console, click on the “Users” tab in the left-hand menu.
2. Click the “Add user” button.
3. Enter a name for the new user and select the “Programmatic access” checkbox to give the user access to AWS via APIs, CLI, and SDKs.
4. Password ( Harry@123 )
5. Click “Next: Permissions” to assign the user permissions.
Click ( Next )
6. Click Create User to create a new user.
Once the user is created, you’ll be provided an Access Key ID and a Secret Access Key, which you can use to programmatically access AWS services. Be sure to keep these credentials safe, as they provide access to your AWS resources.
Click Download .csv file
Log in as the Harry user
Go to the S3 service
1 highsky1 ( Harry has full permission: he can upload data to this bucket and also delete it )
Create an RDS instance
1. Open the AWS Management Console: Go to the AWS Management Console and log in to your AWS account.
2. Choose RDS: From the list of AWS services, choose RDS (Relational Database Service).
3. Click “Create Database”: On the RDS dashboard, click the “Create database” button.
4. Choose a database engine: Select the engine you want to use for your RDS instance. Amazon RDS supports various database engines like MySQL, PostgreSQL, Oracle, SQL Server, MariaDB, etc.
5 Choose a use case: Select the use case that best fits your needs. This will determine the default settings for your RDS instance, such as the instance class, storage type, and allocated storage.
6 . Configure the instance: Configure the RDS instance by specifying its name, username, and password. You can also choose the instance type, storage type, allocated storage, and other settings based on your requirements.
7. Configure advanced settings: If needed, you can configure advanced settings such as backup retention, maintenance window, security groups, and VPC settings.
8. Launch the instance: After configuring all the settings, review your configuration and click “Create Database” to launch your RDS instance.
9. Please wait for the instance to launch: It may take several minutes for your RDS instance to launch. Once it is ready, you can connect to it using the endpoint provided in the AWS Management Console.
That’s it! You have now created an RDS instance in AWS. You can use this instance to host your database and connect to it from your applications.
IAM service policy
1. Open the IAM Management Console: Go to the AWS Management Console and log in to your AWS account. From the list of AWS services, choose “IAM” under “Security, Identity & Compliance”.
2. Create a new policy: In the left-hand navigation pane, click “Policies”, then click “Create policy”.
3. Select a policy template: On the Create Policy page, you can either create your custom policy or use a pre-defined policy template. To create a policy for RDS, you can select the “Amazon RDS” service from the list of available services.
4. Choose the actions: Next, you need to choose the actions that you want to allow or deny for this policy. For example, you might want to allow read-only access to RDS resources or grant permissions to create and modify RDS resources.
6. Choose the resources: Once you have selected the actions, specify the RDS resources to which this policy applies. You can choose to apply the policy to all resources or specify individual resources by ARN (Amazon Resource Name).
1 db – Represents a DB instance, which is an isolated database environment running in the cloud.
Click “Add ARNs” to restrict access.
Click “This account”
( 1 ) Resource Region
ap-south-1
( 2 ) Resource db instance name
database-1
And Click ( Add ARNs )
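With these values, the ARN that gets added will look something like arn:aws:rds:ap-south-1:111122223333:db:database-1 (the account ID here is a placeholder for your own).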
2 snapshot – Represents a snapshot, which is a backup of the storage volume of your DB instance.
Click “Add ARNs” to restrict access.
Click “This account”
( 1 ) Resource Region
ap-south-1
( 2 ) Resource snapshot name
Highsky-Snapshot-name
And Click ( Add ARNs )
( 3 ) And click “Any in this account”
Next
7. Review and create the policy: After specifying the actions and resources, review the policy details and click “Create policy” to save the policy.
8. Attach the policy to a user or group: Once you have created the policy, you need to attach it to a user or group that needs access to RDS resources. You can do this by navigating to the user or group in the IAM console, clicking on the “Permissions” tab, and then attaching the policy to the user or group.
That’s it! You have now created an IAM service policy for RDS and attached it to a user or group. The user or group can now perform the allowed actions on the specified RDS resources.
IAM service role
1. Navigate to the IAM dashboard.
2. Click on “Roles” from the left-hand menu.
3. Click on the “Create role” button.
4. Choose the type of trusted entity for your role: an AWS service, another AWS account, or a web identity provider.
Use case: Allow an AWS service like EC2, Lambda, or others to perform actions in this account.
Click Lambda
5. Select the policies that define the permissions for your role. You can choose from existing policies or create a custom one.
6. Give your Role a name and description.
7. Review your role and click “Create role” to save it.
That’s it! You have now created an IAM service role in AWS. You can use this role to grant permissions to an AWS service or other entities that need to perform actions on your behalf.
Lambda function
1. Navigate to the Lambda dashboard.
2. Click on the “Create function” button.
3. Choose how you want to create the function: author it from scratch, start from a blueprint, or deploy an application from the Serverless Application Repository.
4. Give your function a name and description.
5. Choose a runtime for your function, such as Python, Node.js, or Java.
( A runtime is a version of a programming language or framework that you can use to write Lambda functions. Lambda supports runtime versions for Node.js, Python, Ruby, Go, Java, C# (.NET Core), and PowerShell (.NET Core)
To use other languages in Lambda, you can create your own runtime.
Note that the console code editor supports only Node.js, Python, and Ruby. If you choose a compiled language, such as Java or C#, you edit and compile your code in your preferred development environment and upload a deployment package to the function. )
Choose a recent Python runtime (for example, Python 3.10).
6. Configure the function’s execution role, which determines the permissions that the function has to access AWS resources.
7. Write your function code in the console editor or upload a ZIP file containing your code (a minimal handler sketch follows these steps).
8. Set up your function’s environment variables and any additional settings, such as memory and timeout settings. Click “Create function” to save your Lambda function.
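As a minimal sketch of the kind of handler used later in this walkthrough to start an RDS DB instance on a schedule (the identifier database-1 comes from the earlier RDS section, and the function’s execution role is assumed to allow rds:StartDBInstance):
import boto3

# Reuse the client across invocations
rds = boto3.client("rds")

def lambda_handler(event, context):
    # Start the DB instance created earlier in this lab
    response = rds.start_db_instance(DBInstanceIdentifier="database-1")
    status = response["DBInstance"]["DBInstanceStatus"]
    return {"DBInstanceStatus": status}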
After creating your Lambda function, you can test it manually or set up a trigger to invoke it automatically. You can also monitor your function’s performance and troubleshoot any errors using the AWS Lambda console.
CloudWatch
1. Navigate to the CloudWatch dashboard.
2. Click on “Events” from the left-hand menu.
3. Click on the “Create rule” button.
4. Choose the “Schedule” option under “Event Source”.
Click Continue To create rule
5. Configure the cron expression for when you want the RDS DB instance to start. Schedule expressions use UTC; for example, the expression 30 12 * * ? * triggers every day at 12:30 PM UTC.
6. Choose the Lambda function you created earlier as the target for the event rule.
7. Configure the specific action that you want to perform on the RDS DB instance, which in this case is to start it.
8. Give your rule a name and description.
9. Click “Create rule” to save your CloudWatch event rule.
After creating your CloudWatch event rule, it will trigger at the scheduled times and invoke the Lambda function, which starts the specified RDS DB instance. Be sure to test your rule to ensure it is working as expected.
An AWS account—for example, Account A—can grant another AWS account, Account B, permission to access its resources such as buckets and objects. Account B can then delegate those permissions to users in its account. In this example scenario, a bucket owner grants cross-account permission to another account to perform a specific bucket operation
This article helps you navigate this minefield, with details not only of how the S3 permissions work but also of how you can implement some common real-world scenarios such as S3 bucket access from another AWS account.
Account A administrator user attaches a bucket policy granting cross-account permissions to Account B to perform specific bucket operations. Note that the administrator user in Account B will automatically inherit the permissions.
Account B administrator user attaches user policy to the user delegating the permissions it received from Account A
The user in Account B then verifies permissions by accessing an object in the bucket owned by Account A.
First, we need to create an S3 bucket; the steps are given below:
Step 1: Log on to your AWS Console
Step 2: Go to the search bar and type “S3”
Step 3: Click on “S3 – Scalable Storage in the Cloud” and proceed further
Step 4: Create a new Bucket
In the general configuration category:
Step 5: Enter the bucket name (cross-account bucket )
Object Ownership
Bucket owner enforced – Bucket and object ACLs are disabled, and you, as the bucket owner, automatically own and have full control over every object in the bucket. Access control for your bucket and the objects in it is based on policies such as AWS Identity and Access Management (IAM) user policies and S3 bucket policies. Objects can be uploaded to your bucket only if they don’t specify an ACL or if they use the bucket-owner-full-control canned ACL.
Block Public Access settings for this bucket
Public access is granted to buckets and objects through access control lists (ACLs), bucket policies, access point policies, or all. In order to ensure that public access to this bucket and its objects is blocked, turn on Block all public access. These settings apply only to this bucket and its access points. AWS recommends that you turn on Block all public access, but before applying any of these settings, ensure that your applications will work correctly without public access. If you require some level of public access to this bucket or objects within, you can customize the individual settings below to suit your specific storage use cases
Bucket Versioning
Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.
Disable
Enable
( choose Disable )
Default encryption
The default encryption configuration of an S3 bucket is always enabled and is at a minimum set to server-side encryption with Amazon S3-managed keys (SSE-S3). With server-side encryption, Amazon S3 encrypts an object before saving it to disk and decrypts it when you download the object. Encryption doesn’t change the way that you access data as an authorized user. It only further protects your data. You can configure default encryption for a bucket. You can use either server-side encryption with Amazon S3 managed keys (SSE-S3) (the default) or server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).
Encryption key type
Amazon S3 managed keys (SSE-S3)
AWS Key Management Service key (SSE-KMS)
( Choose the Amazon S3 managed keys (SSE-S3) )
Bucket Key = Enable
Step 7: Click on Create Bucket.
If the bucket is created successfully, you will see a message like this on the top of the page:
Step 8: Click the bucket name
Step 9: Go to the Permissions tab.
Step 10: Bucket policy
The bucket policy, written in JSON, provides access to the objects stored in the bucket. Bucket policies don’t apply to objects owned by other accounts
and click Edit
Policy generator
Step 1: Select Policy Type
A Policy is a container for permissions. The different types of policies you can create are an IAM Policy, an S3 Bucket Policy, an SNS Topic Policy, a VPC Endpoint Policy, and an SQS Queue Policy.
S3 bucket policy
Add Statement(s)
Step 2: Add Statement(s)
A statement is the formal description of a single permission. See a description of elements that you can use in statements.
Step 3: Principal = arn:aws:iam::ACCOUNT-B-ID:root
Step 5: Amazon Resource Name (ARN) = arn:aws:s3:::cruse-account-s3-buckee ( Bucket ARN )
Step 6: Click [ Add statement ]
Repeat the same steps as above, but append /* to the bucket ARN so the statement also covers the objects inside the bucket.
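Putting both statements together, the generated bucket policy should look roughly like the following (the account ID is a placeholder, and the actions shown are only an example of list and object-level permissions; use the actions you actually selected):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT-B-ID:root" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::cruse-account-s3-buckee"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT-B-ID:root" },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::cruse-account-s3-buckee/*"
    }
  ]
}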
In Account B: Step 1: Navigate to the IAM Dashboard by clicking on the “Services” dropdown menu, selecting “Security, Identity, & Compliance,” and then clicking on “IAM.”
Step 2: Click on “Policies” in the left-hand menu and then click on the “Create policy” button.
Step 3: Select the JSON tab and paste in the policy document for the user.
Click Next
Step 4: Policy name = cruse-account-policy
Click Create policy
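For reference, the user policy pasted in Step 3 might look roughly like this (a sketch only: it delegates list and object actions on the Account A bucket to the user, and the actions should match what the bucket policy actually grants):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::cruse-account-s3-buckee",
        "arn:aws:s3:::cruse-account-s3-buckee/*"
      ]
    }
  ]
}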
Security credentials
Step 1: Access keys – Use access keys to send programmatic calls to AWS from the AWS CLI, AWS Tools for PowerShell, AWS SDKs, or direct AWS API calls. You can have a maximum of two access keys (active or inactive) at a time.
Step 2: Click create access key
Continue to create access key?
I understand creating a root access key is not a best practice, but I still want to create one
Next, click Create Access Key
Go to the Windows command prompt (cmd)
Download AWS CLI
Install and update requirements
We support the AWS CLI on Microsoft-supported versions of 64-bit Windows.
Admin rights to install software
Install or update the AWS CLI
To update your current installation of AWS CLI on Windows, download a new installer each time you update to overwrite previous versions. AWS CLI is updated regularly. To see when the latest version was released, see the AWS CLI version 2 Changelog on GitHub.
Step 1: Download and run the AWS CLI MSI installer for Windows (64-bit):
https://awscli.amazonaws.com/AWSCLIV2.msi
Step 2: Alternatively, you can run the msiexec command to run the MSI installer.
Step 3: For various parameters that can be used, see msiexec on the Microsoft Docs website. For example, you can use the /qn flag for a silent installation.
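For example (these are the standard installer commands from the AWS documentation; the second form performs a silent installation):
msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi /qn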
Step 4: To confirm the installation, open the Start menu, search for cmd to open a command prompt window, and at the command prompt run the aws --version command.
If Windows is unable to find the program, you might need to close and reopen the command prompt window to refresh the path or follow the troubleshooting in Troubleshooting AWS CLI errors.