20 May 2023

How to reduce EBS volume size in AWS?

Elastic Block Store (EBS) is a scalable block storage service from AWS (Amazon Web Services) that provides persistent storage for EC2 (Elastic Compute Cloud) instances. An EBS volume attaches to an EC2 instance as a virtual hard drive for storing data.

The following are some significant traits and qualities of EBS volumes:

1. Persistence: EBS volumes are designed for durability and long-term storage. The data on an EBS volume persists even if the attached EC2 instance is stopped or terminated.

2. Elasticity: EBS volumes can be created in a variety of sizes, from 1 GiB to 16 TiB, and scaled up as necessary to accommodate your applications’ fluctuating storage needs. Note that a volume can be grown in place but not shrunk, which is why reducing a volume’s size requires the copy-based procedure described below.

3. Performance: To accommodate a range of workloads, EBS offers several volume types:

( 1 ) General Purpose SSD (gp2/gp3): balanced performance and cost, suitable for a broad variety of workloads.

( 2 ) Provisioned IOPS SSD (io1/io2): designed for high-performance databases and applications that demand consistent, fast I/O.

( 3 ) Throughput Optimized HDD (st1): designed for frequently accessed, throughput-intensive streaming workloads.

( 4 ) Cold HDD (sc1): designed for less frequently accessed cold-data workloads.

4. Snapshots: EBS volumes support creating point-in-time snapshots, which are incremental backups of the volume’s data. Snapshots are stored in Amazon S3 and can be used to restore volumes or create new volumes.

5. Encryption: Using the AWS Key Management Service (KMS), you can enable encryption at rest for your EBS volumes, adding a further layer of protection for your data.

6. High Availability: EBS volumes are automatically replicated within their Availability Zone (AZ) to increase redundancy and availability. A volume can only be attached to EC2 instances in the same AZ.

7. Data management: EBS volumes can be quickly attached to, detached from, and reattached to EC2 instances as needed. You can also copy volumes, take snapshots, and share snapshots across AWS accounts.

EBS volumes are frequently used to provide persistent storage for databases, file systems, content repositories, and other applications on AWS. They offer flexibility, durability, and performance for storing and accessing data in the AWS cloud.

Follow these steps:

1. Open the AWS Management Console and log in.

2. Open the EC2 service.

3. Create an EC2 instance.

4. Connect to the instance (e.g., with PuTTY).

5. Log in as the root user:

[ec2-user@ip-172-31-2-182 ~]$ sudo -i

6. Check the disks:

[root@ip-172-31-2-182 ~]# lsblk

By default, only the root disk is listed.

Create Volume 

1. Click “Volumes” on the left navigation pane of the EC2 Dashboard.

2. Select “Create Volume” from the menu.

3. Enter the following information in the “Create Volume” dialog box:

( 1 ) Availability Zone: Choose the Availability Zone in which you want to create the EBS volume.

( 2 ) Volume Type: Based on your needs, select the appropriate volume type (e.g., General Purpose SSD, Provisioned IOPS SSD, or magnetic).

( 3 ) Size: Enter the volume’s size in gibibytes (GiB).

( 4 ) Snapshot: If you want to create the volume from a snapshot, select it here. This step is optional.

( 5 ) Encryption: Select this option if you want to encrypt the volume. This step is optional.

You can add any tags you want for identification. This step is optional.

Volume type

General Purpose SSD
General Purpose SSD (gp2 and gp3) volumes offer cost-effective storage that is ideal for a broad range of workloads.

Size (GiB)

10 GiB

Availability Zone
The Availability Zone in which to create the volume. After you create the volume, you can only attach it to instances that are in the same Availability Zone.

Note: Make sure the EBS volume and the EC2 instance are in the same Availability Zone.

ap-south-1b

And click Create volume

Volume created successfully.

Attach volume 

4. Once the volume is created, attach it to an EC2 instance by selecting it in the Volumes list and choosing “Actions” > “Attach Volume.” Specify the instance to attach to and a device name (for example, /dev/sdf).

Click Attach volume

Volume attached successfully.

Connect to the instance (PuTTY) and check the disks:

[root@ip-172-31-2-182 ~]# lsblk

The new disk is attached: xvdf, 10 GiB.

Create an LVM logical volume on a partition created on an EBS volume

Short description

1. LVM lets you allocate disk space and stripe, mirror, and resize logical volumes. With LVM, you can assign a single EBS volume or a group of EBS volumes to one or more physical volumes.
Follow these steps to partition the EBS volume and set up LVM on it:

2. Use the fdisk command to create a partition and set its type to 8e (Linux LVM). In the example that follows, the partition /dev/xvdf1 is created on /dev/xvdf.

[root@ip-172-31-2-182 ~]# fdisk /dev/xvdf
Welcome to fdisk (util-linux 2.37.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x5cd32037.

Command (m for help): n
Partition type

p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)

Select (default p): e
Partition number (1-4, default 1): ( Enter )
First sector (2048-20971519, default 2048): ( Enter )
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-20971519, default 20971519): ( Enter )

Created a new partition 1 of type 'Extended' and of size 10 GiB.

Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): 8e
Changed type of partition 'Extended' to 'Linux LVM'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
[root@ip-172-31-2-182 ~]# yum install lvm2

Use the pvcreate command to build a physical volume from the partition. The example below creates a physical volume from the device /dev/xvdf1:

[root@ip-172-31-2-182 ~]# pvcreate /dev/xvdf1

Physical volume "/dev/xvdf1" successfully created.
Creating devices file /etc/lvm/devices/system.devices

Create volume groups and add the physical volumes to them.

Use the vgcreate command to create a volume group from the new physical volumes. The example below builds the volume group datastore from one physical volume:

[root@ip-172-31-2-182 ~]# vgcreate datastore /dev/xvdf1

Volume group "datastore" successfully created

Use the lvcreate command to create logical volumes (partitions) from your volume group. The example below creates a 10 GB logical volume named database from the datastore volume group, then formats it with ext3:

[root@ip-172-31-2-182 ~]# lvcreate -n database -L 10G datastore

Logical volume "database" created.

[root@ip-172-31-2-182 ~]# mkfs.ext3 /dev/datastore/database

mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 2621184 4k blocks and 655360 inodes
Filesystem UUID: 172211d1-022b-4d8f-85fa-8c44b223afb9
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

[root@ip-172-31-2-182 ~]# blkid /dev/datastore/database

/dev/xvdf1: UUID="172211d1-022b-4d8f-85fa-8c44b223afb9" SEC_TYPE="ext2" BLOCK_SIZE="4096" TYPE="ext3" PARTUUID="5cd32037-01"

[root@ip-172-31-2-182 ~]# blkid /dev/xvdf1 >> /etc/fstab
[root@ip-172-31-2-182 ~]# vim /etc/fstab

/dev/xvdf1: UUID="172211d1-022b-4d8f-85fa-8c44b223afb9" SEC_TYPE="ext2" BLOCK_SIZE="4096" TYPE="ext3" PARTUUID="5cd32037-01"

UUID=172211d1-022b-4d8f-85fa-8c44b223afb9 /mnt/data ext3 defaults  0  0

[root@ip-172-31-2-182 ~]# mkdir /mnt/data
[root@ip-172-31-2-182 ~]# mount -a
[root@ip-172-31-2-182 ~]# df -h

Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 475M 0 475M 0% /dev/shm
tmpfs 190M 2.8M 188M 2% /run
/dev/xvda1 8.0G 1.5G 6.5G 19% /
tmpfs 475M 0 475M 0% /tmp
tmpfs 95M 0 95M 0% /run/user/1000
/dev/xvda128 10M 1.3M 8.7M 13% /boot/efi
/dev/xvdf1 9.8G 92K 9.3G 1% /mnt/data

[root@ip-172-31-2-182 ~]# lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
xvda 202:0 0 8G 0 disk
├─xvda1 202:1 0 8G 0 part /
├─xvda127 259:0 0 1M 0 part
└─xvda128 259:1 0 10M 0 part /boot/efi
xvdf 202:80 0 10G 0 disk
└─xvdf1 202:81 0 10G 0 part /mnt/data
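The blkid output above was appended to /etc/fstab and then edited into a proper mount entry by hand. That transformation can also be scripted; the sketch below is illustrative only (the helper name is hypothetical; stdlib only):

```python
import re

def blkid_to_fstab(blkid_line, mount_point, fs_type="ext3"):
    """Pull the filesystem UUID out of a blkid output line and build
    the matching /etc/fstab entry (defaults, no dump, no fsck)."""
    match = re.search(r'UUID="([0-9a-fA-F-]+)"', blkid_line)
    if match is None:
        raise ValueError("no UUID found in blkid output")
    return f"UUID={match.group(1)} {mount_point} {fs_type} defaults 0 0"

line = '/dev/xvdf1: UUID="172211d1-022b-4d8f-85fa-8c44b223afb9" TYPE="ext3"'
print(blkid_to_fstab(line, "/mnt/data"))
# UUID=172211d1-022b-4d8f-85fa-8c44b223afb9 /mnt/data ext3 defaults 0 0
```

Mounting by UUID rather than by device name (/dev/xvdf1) is the safer choice, since device names can change between reboots.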

Create a new, smaller volume

1. Click “Volumes” on the left navigation pane of the EC2 Dashboard.

2. Select “Create Volume” from the menu.

3. Enter the following information in the “Create Volume” dialog box:

( 1 ) Availability Zone: Choose the Availability Zone in which you want to create the EBS volume.

( 2 ) Volume Type: Based on your needs, select the appropriate volume type (e.g., General Purpose SSD, Provisioned IOPS SSD, or magnetic).

( 3 ) Size: Enter the volume’s size in gibibytes (GiB).

( 4 ) Snapshot: If you want to create the volume from a snapshot, select it here. This step is optional.

( 5 ) Encryption: Select this option if you want to encrypt the volume. This step is optional.

You can add any tags you want for identification. This step is optional.

Volume type

General Purpose SSD
General Purpose SSD (gp2 and gp3) volumes offer cost-effective storage that is ideal for a broad range of workloads.

Size (GiB)

5 GiB

Availability Zone
The Availability Zone in which to create the volume. After you create the volume, you can only attach it to instances that are in the same Availability Zone.

Note: Make sure the EBS volume and the EC2 instance are in the same Availability Zone.

ap-south-1b

And click Create volume

Volume created successfully.

Attach volume 

6. Once the volume is created, attach it to the EC2 instance by selecting it in the Volumes list and choosing “Actions” > “Attach Volume.” Specify the instance and a device name (for example, /dev/sdg).

Click Attach volume

Connect to the instance (PuTTY) and check the disks:

[root@ip-172-31-2-182 ~]# lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
xvda 202:0 0 8G 0 disk
├─xvda1 202:1 0 8G 0 part /
├─xvda127 259:0 0 1M 0 part
└─xvda128 259:1 0 10M 0 part /boot/efi
xvdf 202:80 0 10G 0 disk
└─xvdf1 202:81 0 10G 0 part /mnt/data
xvdg 202:96 0 5G 0 disk

[root@ip-172-31-2-182 ~]# fdisk /dev/xvdg

Welcome to fdisk (util-linux 2.37.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x5955208c.

Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): e
Partition number (1-4, default 1): ( Enter )
First sector (2048-10485759, default 2048): ( Enter )
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-10485759, default 10485759): ( Enter )

Created a new partition 1 of type 'Extended' and of size 5 GiB.

Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): 8e
Changed type of partition 'Extended' to 'Linux LVM'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Create an LVM logical volume on a partition created on the EBS volume

1. As with the first volume, LVM will manage the new disk. Use the fdisk command (as shown above) to create the partition /dev/xvdg1 on /dev/xvdg and set its type to 8e (Linux LVM).

2. Then create the physical volume, volume group, logical volume, and filesystem:

[root@ip-172-31-2-182 ~]# pvcreate /dev/xvdg1
[root@ip-172-31-2-182 ~]# vgcreate vgdatastore /dev/xvdg1
[root@ip-172-31-2-182 ~]# lvcreate -n lvdatabase -L 5G vgdatastore
[root@ip-172-31-2-182 ~]# mkfs.ext3 /dev/vgdatastore/lvdatabase

mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 1310464 4k blocks and 327680 inodes
Filesystem UUID: 69463fb2-f6cd-48ba-a47b-62df5674daf0
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

[root@ip-172-31-2-182 ~]# blkid /dev/vgdatastore/lvdatabase

/dev/xvdg1: UUID="69463fb2-f6cd-48ba-a47b-62df5674daf0" SEC_TYPE="ext2" BLOCK_SIZE="4096" TYPE="ext3" PARTUUID="5955208c-01"

[root@ip-172-31-2-182 ~]# blkid /dev/xvdg1 >> /etc/fstab
[root@ip-172-31-2-182 ~]# vim /etc/fstab

/dev/xvdg1: UUID="69463fb2-f6cd-48ba-a47b-62df5674daf0" SEC_TYPE="ext2" BLOCK_SIZE="4096" TYPE="ext3" PARTUUID="5955208c-01"

UUID=69463fb2-f6cd-48ba-a47b-62df5674daf0 /mnt/database ext3 defaults 0 0

[root@ip-172-31-2-182 ~]# mkdir /mnt/database
[root@ip-172-31-2-182 ~]# mount -a
[root@ip-172-31-2-182 ~]# df -h

Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 475M 0 475M 0% /dev/shm
tmpfs 190M 2.9M 188M 2% /run
/dev/xvda1 8.0G 1.5G 6.5G 19% /
tmpfs 475M 0 475M 0% /tmp
/dev/xvda128 10M 1.3M 8.7M 13% /boot/efi
/dev/xvdf1 9.8G 92K 9.3G 1% /mnt/data
tmpfs 95M 0 95M 0% /run/user/1000
/dev/xvdg1 4.9G 92K 4.6G 1% /mnt/database

[root@ip-172-31-2-182 ~]# lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
xvda 202:0 0 8G 0 disk
├─xvda1 202:1 0 8G 0 part /
├─xvda127 259:0 0 1M 0 part
└─xvda128 259:1 0 10M 0 part /boot/efi
xvdf 202:80 0 10G 0 disk
└─xvdf1 202:81 0 10G 0 part /mnt/data
xvdg 202:96 0 5G 0 disk
└─xvdg1 202:97 0 5G 0 part /mnt/database

Copy the data from the old drive (data) to the new drive (database):

[root@ip-172-31-2-182 data]# cp -avxH /mnt/data/* /mnt/database/

Verify that all data has been copied from the old EBS volume to the new one:

[root@ip-172-31-2-182 ~]# df -h

Change the mount point from /mnt/database to /mnt/data

[root@ip-172-31-2-182 /]# vim /etc/fstab

Old fstab UUID entries:

UUID=172211d1-022b-4d8f-85fa-8c44b223afb9 /mnt/data ext3 defaults 0 0
UUID=69463fb2-f6cd-48ba-a47b-62df5674daf0 /mnt/database ext3 defaults 0 0

New fstab UUID entries:

#UUID=172211d1-022b-4d8f-85fa-8c44b223afb9 /mnt/data ext3 defaults 0 0
UUID=69463fb2-f6cd-48ba-a47b-62df5674daf0 /mnt/data ext3 defaults 0 0

[root@ip-172-31-2-253 /]# umount /mnt/data /mnt/database
[root@ip-172-31-2-253 /]# mount -a

Then check the mount points:

[root@ip-172-31-2-253 /]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 475M 0 475M 0% /dev/shm
tmpfs 190M 2.8M 188M 2% /run
/dev/xvda1 8.0G 1.5G 6.5G 19% /
tmpfs 475M 0 475M 0% /tmp
/dev/xvda128 10M 1.3M 8.7M 13% /boot/efi
tmpfs 95M 0 95M 0% /run/user/1000
/dev/xvdg1 5.0G 22M 4.7G 1% /mnt/data

Go to EC2 Dashboard and click “Volumes”

Select the 10 GiB EBS volume and detach it.

Then delete the 10 GiB volume.

[root@ip-172-31-2-253 /]# lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
xvda 202:0 0 8G 0 disk
├─xvda1 202:1 0 8G 0 part /
├─xvda127 259:0 0 1M 0 part
└─xvda128 259:1 0 10M 0 part /boot/efi
xvdg 202:96 0 5G 0 disk
└─xvdg1 202:97 0 5G 0 part /mnt/data

Successfully detached and deleted. The EBS volume has been reduced from 10 GiB to 5 GiB.

Practical Implementations of DevOps: Case Studies Explored in Online Courses
18 May 2023

The term “DevOps” was created by combining “development” and “operations”; contrary to what many people believe, it refers to neither code nor a programming language. It is an idea or mindset that facilitates collaboration between the development team and the operations team within a single IT group or company.

  • DevOps Implementation in an online trading company

There are instances when the two groups are combined into one. In this case study, a financial trading firm automated its testing, build, and deployment process; leveraging DevOps, deployments were completed in 45 seconds. The length of the entire procedure was shortened, and customer engagement improved. You can explore more case studies by enrolling in a DevOps Online Course in Ahmedabad.

  • Advance your skills with Ansible and DevOps

Ansible makes DevOps simpler by automating the integration of internally developed apps into production processes. Ansible is a very popular DevOps tool for overseeing, automating, orchestrating, and customizing IT infrastructure. With the help of the Ansible Training and Certification Course in Ahmedabad, you will gain greater insight into Ansible as an advanced tool for automation, systems management, and DevOps that still offers practical applications for everyday users. Without the need for programming knowledge, Ansible enables you to set up a single machine, or even a whole fleet of machines, simultaneously.

  • Meet industry demands with Linux systems

Enrolling in Linux Certification in Ahmedabad can help you address the demands of your company, or you may simply want to certify your Linux expertise. In either case, professional certifications may be determined by performance-based exams, practical tests, or a mix of these. These techniques are intended to assess your aptitude for the duties expected of Linux administrators.

There are several ways to be prepared for a certification test. Still, training courses are well-liked since they might demonstrate to you real Linux-based corporate systems and the usual problems and activities you must learn. Several businesses provide Linux Administration Online Training in Ahmedabad that enables you to study while carrying out activities in an online Linux atmosphere to learn real-world applicability in an educational setting.

Highsky IT Solutions offers the most effective courses to help you gain the knowledge to succeed as a DevOps engineer. They also demonstrate the business’s development team’s perspective, ability to collaborate, operation of product development, and a few tools businesses employ to safeguard the effectiveness of their web applications.

17 May 2023

How to enable Multi-Factor Authentication (MFA) on the Root Account

An essential security precaution is setting up Multi-Factor Authentication (MFA) for your AWS root account. By requiring an additional verification step when logging into your AWS account, MFA adds an extra layer of security. The following describes how to set up MFA for your AWS root account:

Step 1: Use your root account login information to access the AWS Management Console.

Step 2: Select “My Security Credentials” from the dropdown menu by clicking on your account name or number in the top navigation bar. The page titled “Security Credentials” will then be shown.

Step 3: Scroll down to the “Multi-Factor Authentication (MFA)” section on the “Security Credentials” page, and then select the “Manage MFA” option.

Step 4: Select “Continue” from the menu on the “Manage MFA Device” page to begin the configuration procedure.

Step 5: You will be given the option of selecting an MFA device. These are your two choices:

1. Virtual MFA device: This method generates MFA codes using a smartphone app, such as Google Authenticator or Authy.

2. U2F security key: With this method, MFA is provided by means of a physical security key, such as a YubiKey.

Select the “Continue” button after making your selection according to your preferences.

Step 6: To configure your chosen MFA device, adhere to the on-screen instructions:

1.  Install a compatible app on your smartphone, then scan the QR code provided by AWS to use a virtual MFA device. You must input the 6-digit authentication number generated by the app on the AWS page.

2. For a U2F security key, insert the key into a USB port and press its button when instructed.

Step 7: AWS will give you backup codes when you set up your MFA device. In the event that you lose control of your MFA device, these codes are crucial. Keep these codes in a safe place at all times.

Step 8: You will be prompted to enable MFA for your root account once the setup process is complete. Select “Activate MFA” from the menu.

Step 9: You will be prompted by AWS to sign in once more with your root account credentials, but this time you must additionally enter the MFA code from your device.

Step 10: Your AWS root account will be secured by this additional security precaution after successfully signing in with MFA.

To guarantee you can access your AWS account in the future, keep your MFA device (smartphone or U2F security key) and the backup codes in a secure location.

1. Select “My Security Credentials” from the dropdown menu by clicking on your account name or number in the top navigation bar. The page titled “Security Credentials” will then be shown.

2.  Scroll down to the “Multi-Factor Authentication (MFA)” section on the “Security Credentials” page, and then select the “Manage MFA” option.

and click “Assign MFA device”.

3. Specify the MFA device name: enter a meaningful name to identify this device.

Select the MFA device to use:

Authenticator app
Authenticate using a code generated by an app installed on your mobile device or computer.

4. Install a compatible app on your smartphone, then scan the QR code provided by AWS to use a virtual MFA device. You must input the 6-digit authentication number generated by the app on the AWS page.

Go to the mobile Play Store, search for Google Authenticator, and install the app.

Click Add MFA

Successful MFA device assigned 
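The 6-digit codes the authenticator app generates are standard TOTP (RFC 6238), which is just HOTP (RFC 4226) driven by a 30-second time counter. A minimal stdlib sketch of that computation, for illustration only (never roll your own MFA in production):

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter with dynamic
    truncation. TOTP (what the authenticator app shows) is simply
    hotp(secret, int(time.time()) // 30)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors for the ASCII secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Because the code depends only on the shared secret (the QR code) and the current time, AWS and your phone can agree on it without any network connection between them.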

Log out of your account.

1. Then log in to your account again.

2. Enter your email in the email field.

3. Enter your password in the password field.

4. Enter the code from Google Authenticator (MFA).

Multi-Factor Authentication (MFA) is now successfully enabled on the root account.

Building a Productive and Efficient Data Science Team with the DevOps Culture
05 May 2023

The data science industry is expanding swiftly, and more organizations recognize the advantages of hiring people with data science skills. Job postings for data scientists rose 75 percent over the past three years. More people are learning data science to differentiate themselves from other candidates and pursue this potentially rewarding profession. Getting the appropriate training is essential if you’re interested in establishing a data science career.

  • Gain professional advantages by learning DevOps

Studying DevOps can be extremely helpful for everyone in the software development field, whether in operations or design. You can participate in a DevOps Online Course in Ahmedabad suited to your expertise and skill level and enjoy professional advantages such as shorter release cycles, higher delivery rates, better teamwork and communication, greater automation, and collaboration with skilled programmers.

  • Boost your data science skills with Docker courses

Data scientists may benefit from learning Docker by enrolling in the Docker Certification Course in Ahmedabad. It enables them to quickly manage dependencies and environments, ensuring their software runs reliably on many platforms. Additionally, it frees them from depending entirely on the DevOps group. Docker’s portability enables quicker project launches, since several data scientists can easily collaborate on the same containers.

  • Enhance your skills with Python Courses

Many people choose Python because of its accessibility, but data scientists find it even more enticing because of its wide range of excellent libraries. With the addition of libraries throughout time, Python has become more sophisticated and efficient. You can enroll in Python Courses in Ahmedabad and learn about selecting a library perfect for your Data Science requirements.

  • Enhance your career with data science training

Data science training makes your ability to meet the growing need for Big Data expertise and technologies easier. Professionals that have completed Data Science Training Ahmedabad are equipped with data management tools. An extra benefit for an applicant for an enhanced and successful career is if they are knowledgeable of and proficient in these important data abilities.

Enrolling in a data science training program will provide you with every detail you require to succeed in the industry, including the basics to advanced abilities. This is the initial step in obtaining certification as a data scientist. A well-known provider of computer technologies and services, Highsky IT Solutions also offers a range of IT certifications and training courses in cloud computing, open-source programming, networking, privacy and security, and data science.

28 April 2023

Start an EC2 Instance with a Lambda Function and EventBridge

1. Go to the IAM dashboard by searching for it.

2. Click on Policies in the left-hand navigation pane.

3. Click on the Create policy button.

4. Choose either the JSON or the Visual editor tab, depending on your preference for creating the policy.

5. Create a policy using either the JSON code editor or the visual editor.

6. Give your policy a name and description.

7. Click on the Create Policy button to save your policy

Here’s an example of a simple IAM policy created using the visual editor:

IAM (service) policy

1. Click on the “Visual Editor” tab.

2. Click on “Create Policy”.

3. Select services, actions, and resources.

4. Choose whether to grant or deny permissions.

5. Create your policy using either the JSON code editor or the visual editor.

6. Define your policy by adding the required statements. Each statement must include an action, a resource, and an effect (“Allow” to grant access, or “Deny” to explicitly deny access to the specified resources).


7. Review the policy summary.

8. Provide a name and a description.

9. Click on the “Create policy” button.

Once you’ve created your policy, you can attach it to one or more IAM users, groups, or roles to grant or restrict their access to AWS resources.
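For this article’s use case (a Lambda function that starts an instance), the JSON policy might look like the sketch below, generated here with Python’s json module so the structure is easy to see; the helper name, region, account ID, and instance ID are all placeholders:

```python
import json

def start_instance_policy(region, account_id, instance_id):
    """Least-privilege IAM policy document letting a role call
    ec2:StartInstances on a single instance. All three arguments
    are placeholders for your own values."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "ec2:StartInstances",
                "Resource": f"arn:aws:ec2:{region}:{account_id}:instance/{instance_id}",
            }
        ],
    }

print(json.dumps(start_instance_policy("ap-south-1", "111122223333", "i-0abcd1234"), indent=2))
```

Scoping the Resource to one instance ARN, rather than "*", keeps the Lambda role from starting anything else in the account.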

IAM (service) Roles

1. Navigate to the IAM dashboard.

2. Click on “Roles” from the left-hand menu.

3. Click on the “Create role” button.

4. Choose the type of trusted entity for your role: an AWS service, another AWS account, or a web identity provider.

5. Choose the use case that best fits your scenario, such as EC2 or Lambda.

6. Select the policies that define the permissions for your role. You can choose from existing policies or create a custom one.

7. Give your role a name and description.

8. Review your role and click “Create role” to save it.

After creating your role, you can assign it to an IAM user or group to permit them to access AWS resources. For example, you can assign the role to an EC2 instance to permit it to access other AWS resources. Be sure to test your role to ensure it provides the intended level of access.

Lambda function

1. Navigate to the Lambda dashboard.

2. Click on the “Create function” button.

3. Choose the type of function you want to create. You can create a function from scratch, a blueprint, or a serverless application repository.

4. Give your function a name and description.

5. Choose a runtime for your function, such as Python, Node.js, or Java.

6. Configure the function’s execution role, which determines the permissions that the function has to access AWS resources.

7. Write your function code or upload a ZIP file containing your code.

8. Configure your function’s triggers, which determine when the function is executed. As triggers, you can use AWS services such as S3, API Gateway, or CloudWatch Events.

9. Set up your function’s environment variables and any additional settings, such as memory and timeout settings.

10. Click “Create function” to save your Lambda function.

After creating your Lambda function, you can test it by invoking it manually or setting up a trigger to invoke it automatically. You can also monitor your function’s performance and troubleshoot any errors using the AWS Lambda console.
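The function body for this use case can be very small. Below is a hedged sketch: in Lambda the EC2 client would come from boto3 (ec2_client = boto3.client("ec2")), but here it is passed in as a parameter so the logic can be exercised without AWS, and the event shape (an instance_ids list) is an assumption for illustration:

```python
def lambda_handler(event, context, ec2_client=None):
    """Start the EC2 instances named in the event.
    In AWS Lambda: ec2_client = boto3.client("ec2") (created outside
    the handler); the injected parameter here is for local testing only."""
    instance_ids = event.get("instance_ids", [])
    response = ec2_client.start_instances(InstanceIds=instance_ids)
    started = [i["InstanceId"] for i in response["StartingInstances"]]
    return {"statusCode": 200, "started": started}
```

For example, invoking it with {"instance_ids": ["i-0abcd1234"]} asks the client to start that one instance and returns the IDs it acted on.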

CloudWatch

1. Navigate to the CloudWatch dashboard.

2. Click on “Events” from the left-hand menu.

3. Click on the “Create rule” button.

4. Choose the “Schedule” option under “Event Source”.

5. Configure the cron expression for when you want the EC2 instance to start. EventBridge evaluates cron expressions in UTC; for example, to start the instance every day at 7:00 PM IST (13:30 UTC), you would use the expression 30 13 * * ? *.

6. Choose the EC2 instance as the target for the event rule.

7. Configure the specific action that you want to perform on the EC2 instance, which in this case is to start it.

8. Give your rule a name and description.

9. Click “Create rule” to save your CloudWatch event rule.

After creating your CloudWatch event rule, it will trigger at the scheduled times and start the specified EC2 instance. Be sure to test your rule to ensure it is working as expected.
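Since the schedule above is evaluated in UTC, a local time has to be converted before it goes into the rule. A small sketch of that conversion (the helper name is hypothetical; stdlib only):

```python
from datetime import datetime, timedelta, timezone

def daily_cron_utc(local_hour, local_minute, utc_offset_minutes):
    """Turn a daily local-time schedule into the UTC cron fields that
    EventBridge expects (cron expressions are evaluated in UTC)."""
    local_tz = timezone(timedelta(minutes=utc_offset_minutes))
    local = datetime(2023, 5, 1, local_hour, local_minute, tzinfo=local_tz)
    utc = local.astimezone(timezone.utc)
    return f"cron({utc.minute} {utc.hour} * * ? *)"

# 7:00 PM IST (UTC+05:30) is 13:30 UTC
print(daily_cron_utc(19, 0, 330))  # cron(30 13 * * ? *)
```

Note the fixed offset means the result does not follow daylight-saving changes; IST has none, but other zones would need re-checking twice a year.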

25 April 2023

AWS: Deny user to Delete Object & Put Object from S3 Bucket

First, we need to create an s3 Bucket steps are given below:

Step 1: Log in to your AWS Console.
Step 2: Go to the search bar and search for “S3”.

Step 3: Click on “S3 - Scalable Storage in the Cloud” and proceed.

Step 4: Create a new Bucket

In the general configuration category:

Step 5: Enter the bucket name (delete-object-put object bucket in our case).

Step 6: Next, choose the AWS region [Asia Pacific (Mumbai) ap-south-1].

ACLs disabled (Recommended)

Bucket owner enforced – Bucket and object ACLs are disabled, and you, as the bucket owner, automatically own and have full control over every object in the bucket. Access control for your bucket and the objects in it is based on policies such as AWS Identity and Access Management (IAM) user policies and S3 bucket policies. Objects can be uploaded to your bucket only if they don’t specify an ACL or if they use the bucket-owner-full-control canned ACL.
Block Public Access settings for this bucket
Public access is granted to buckets and objects through access control lists (ACLs), bucket policies, access point policies, or all of these. To ensure that public access to this bucket and its objects is blocked, turn on Block all public access. These settings apply only to this bucket and its access points. AWS recommends that you turn on Block all public access, but before applying any of these settings, ensure that your applications will work correctly without public access. If you require some level of public access to this bucket or the objects within it, you can customize the individual settings below to suit your specific storage use cases.

Bucket Versioning

Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.

Disable

( Choose Disable )

Default encryption

The default encryption configuration of an S3 bucket is always enabled and is at a minimum set to server-side encryption with Amazon S3-managed keys (SSE-S3). With server-side encryption, Amazon S3 encrypts an object before saving it to disk and decrypts it when you download the object. Encryption doesn’t change the way that you access data as an authorized user. It only further protects your data. You can configure default encryption for a bucket. You can use either server-side encryption with Amazon S3 managed keys (SSE-S3) (the default) or server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).

Amazon S3 managed keys (SSE-S3)

( Choose the  Amazon S3 managed keys (SSE-S3) )

Bucket Key = Enable

Step 7: Click on Create Bucket.

If the bucket is created successfully, you will see a message like this on the top of the page:
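The console choices above can also be expressed as API parameters. The sketch below is illustrative, not a definitive implementation: the bucket name is a placeholder, and the dictionaries mirror the shapes accepted by S3 API calls such as boto3's `put_public_access_block` and `put_bucket_encryption`.

```python
import json

# Placeholder bucket name used only for illustration.
bucket_name = "my-example-bucket"

# "Block all public access" maps to the four flags of the
# PublicAccessBlockConfiguration structure.
public_access_block = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

# Default encryption with Amazon S3 managed keys (SSE-S3) and
# "Bucket Key = Enable" maps to this ServerSideEncryptionConfiguration.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"},  # SSE-S3
            "BucketKeyEnabled": True,
        }
    ]
}

print(json.dumps(encryption_config, indent=2))
```

With real credentials, these dictionaries could be passed to the corresponding boto3 calls after the bucket is created.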

Creating an IAM (Identity and Access Management) service in AWS (Amazon Web Services) can be done by following these steps:

1. Go to the IAM service by searching for it in the search bar or selecting it from the list of services.

2. Once in the IAM console, click on the “Users” tab in the left-hand menu.

3. Click the “Add user” button.

4. Enter a name for the new user and select the “Programmatic access” checkbox to give the user access to AWS via APIs, CLI, and SDKs.

5. Click “Next: Permissions” to assign the user permissions.

6. Choose an existing policy or create a new one that defines the user’s permissions.

7. Click “Review” to review the user’s information and permissions.

8. Click “Create user” to create the new user.

Once the user is created, you’ll be provided an Access Key ID and a Secret Access Key, which you can use to programmatically access AWS services. Be sure to keep these credentials safe, as they provide access to your AWS resources.

Click Download .csv file

To create an IAM (Identity and Access Management) policy in AWS (Amazon Web Services), you can follow these steps:

1. Go to the IAM service by searching for it in the search bar or selecting it from the list of services.

2. Once in the IAM console, click on the “Policies” tab in the left-hand menu.

3. Click the “Create policy” button.

4. Choose either the “Visual editor” or the “JSON” tab to create the policy.

5. Choose the “Visual editor” tab to select the service the policy will apply to, and then choose the actions and resources the policy will allow or deny.

Deny

6. Alternatively, choose the “JSON” tab and enter the policy in JSON format. The policy must include a Version and a Statement containing an Effect, an Action, and a Resource.

7. To create the policy, enter a name and description, and click “Create policy.”

Once the policy is created, you can attach it to a user, group, or role in IAM. When the user, group, or role tries to access a resource, the policy will be checked to determine whether the action is allowed or denied.
It’s important to test your policy to ensure that it’s providing the intended access and restrictions. You can do this by using the Simulate policy feature in the IAM console, which lets you simulate a policy to see how it would apply in different scenarios.
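As an illustration, an identity-based policy that denies object uploads and deletions could look like the following. The bucket ARN here is hypothetical; it mirrors the bucket name used later in this walkthrough.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::deleteobject-putobject-deny/*"
    }
  ]
}
```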

Attach policy

1. Once in the IAM console, click on the “Users,” “Groups,” or “Roles” tab in the left-hand menu, depending on which entity you want to attach the policy to.

2. Select the user, group, or role that you want to attach the policy to.

3. Click on the “Permissions” tab, and then click on the “Attach policies” button.

4. In the search bar, type the name or description of the policy that you want to attach, and then select the policy from the list.

5. Click “Attach policy” to attach the policy to the selected entity.

After attaching the policy, the user, group, or role will have the permissions granted by the policy. You can also create custom policies and attach them to entities as needed. Be sure to test your policies to ensure that they’re providing the intended access and restrictions.

Policies attached successfully

Log in as the user

Click on the Amazon S3 service

Click on the bucket name = Deleteobject-putobject-deny

Click [ Upload ]


As the snapshots below show, the user is not able to delete objects and not able to upload objects.


Grab AWS Course To Skill As Architects & Developers!
28 March 2023

With the expansion of digitalization and modernization, there has been a rapid increase in the use of cloud computing to make workplaces more efficient. In recent times, Amazon Web Services (AWS) has become one of the most widely used platforms for cloud-based operations. Today, AWS Training and Certification in Ahmedabad helps you work with DevOps engineers, developers, and other technology team members to reach the most efficient solutions for all your business needs. AWS offers a wide range of courses for architects and developers. These courses are designed to be flexible so you can learn at your own pace, and they cover everything from fundamental ideas to more complex subjects.

What Is the Course You Can Scale Through AWS?

Here are some of the most popular AWS courses for architects and developers in today’s market:

  • Associate AWS Certified Architect

This course is intended for architects who want to learn how to design and implement fault-tolerant, scalable systems on Amazon Web Services. It covers AWS security and compliance, architecture, storage and databases, and computing and networking.

  • Associate AWS Certified Developer

This course’s target audience is developers who are interested in developing and deploying AWS applications. It discusses AWS serverless technologies, messaging services, security and compliance, and compute, storage, and database services.

  • Professional AWS Certified Architect

Experienced architects who want to learn how to design and deploy advanced systems on AWS, and attain Cloud Computing Certifications in Ahmedabad, should take this course. It discusses AWS migration, advanced architectures, high availability, scalability, compliance, and elasticity.

  • Professional DevOps Engineer with AWS Certification

This course suits developers and operations professionals who want to learn how to automate application deployment and management on AWS. The DevOps Classes and Training in Ahmedabad help the candidate gain broad knowledge of AWS networking and hybrid architectures, security and compliance, continuous delivery and deployment, and monitoring and logging of the system.

  • AWS Certified Advanced Networking – Specialty

The AWS Certified Advanced Networking course is aimed at architects who want to learn how to design and implement advanced networking solutions. It focuses on AWS networking concepts, AWS VPN, AWS Direct Connect, Amazon Route 53, and AWS Security Training in Ahmedabad.

Conclusion

AWS certifications are recognized by the industry and can help you advance your career. Employers place a high value on AWS certifications, which can help you stand out from other applicants when applying for jobs. These courses are a great way for architects and developers to learn how to use AWS services effectively. They cover everything from essential ideas to advanced topics and are flexible and interactive.

At Highsky IT Solutions, AWS courses are designed to be interactive, allowing you to put what you learn into practice in real-world situations. They provide easy learning through online, classroom and corporate training modes and help to scale your skills and job prospects.

Linux Advanced - Process Management by Highskyit Solution
22 February 2023

Enrich your career by joining the Linux Advanced course by Highskyit Solution 

We live in a world where new technologies & tools emerge every year. To keep up with current market technology, you need to upgrade yourself! So, in this blog, we’ll focus on why Linux Advanced is the best platform to start your career. Continue reading this blog to understand the structure of the Linux Administration Management Course Ahmedabad.

What will you learn in this course?

If we talk about the Linux utilization trend, it has usually been a platform for servers. But recently, the Linux platform has undergone several iterations with UI modifications suitable for personal use. Many individuals are using it all across the globe. If you want to enrich your career by getting certified as an advanced Linux professional, you can talk with the consultants at Highsky IT Solutions.

When you join the course, you will get solid Linux skills & a great understanding of Linux concepts. You will become a master of all the essential Linux commands. You can also join the Kubernetes Course Ahmedabad. Once you complete this course, you can apply for various Linux jobs.

Why should you join such courses?

  • Receive high security: Security is the central reason for acquiring knowledge of Linux. The Linux platform is more stable than the Windows platform. It is used in various organizations for tightening up security & maintaining high security standards. It is an open-source platform, so users don’t need to pay any licensing cost.
  • Get high stability: The Linux platform is known for its stability. The best part about this platform is that users don’t experience frequent crashes, which ultimately benefits the organization. Uptime for Linux servers is high, which is why Linux runs the highest number of servers on the internet. You can also join Python Courses in Ahmedabad.
  • Easy to maintain: The Linux platform is easy to maintain. There is no extra cost associated with it, which is why it is known as a user-friendly platform. It is getting popular among people because of its flexibility & ease of use.

Current market trends of Linux:

When it comes to Linux software or Microsoft Azure Certification Ahmedabad, there is a lot of scope because Linux is an open-source platform. The opportunities after completing this course are high for freshers as well, and the course is flexible for newbies. The salary range for this job role depends upon the location, organization type & years of experience you have.

If you also want to showcase your talent & want to add a certain level of excellence to your career, you can join this course.

The Best Networking Courses to Take in 2023
29 January 2023

In information technology, the infrastructure field is consistently regarded as one of the most researched and fastest developing. The newest trend, which is certain to continue in the long run, is online enrollment in networking courses. If you are just starting in the IT industry and want to build your career there, the following trending and in-demand tech career options will help you land high-paying jobs. The following list of networking courses focuses primarily on the most reputable and well-liked ones, and was compiled with job-seeking trends in the IT industry in mind.

  • Kubernetes Course

Kubernetes is an open-source platform for automating application container deployment, scaling, and operation. It has proven extremely beneficial to businesses and organizations because it automates various aspects of application development. Features like pod networks, service networks, cluster IPs, container ports, host ports, and node ports are all part of Kubernetes networking. This practical Kubernetes Course in Ahmedabad will teach you how to effectively use Kubernetes networking. You will be taught how to manage the complexities of networking through examples that are easy to understand, practical, and hands-on. You will begin by quickly going over the most important aspects of Kubernetes before moving on to the fundamentals of Kubernetes networking.

  • Docker Course

In this Docker course Ahmedabad, you will learn how to configure and manage service discovery and how to build and manage container networks. Modern cloud-native applications are based on containers and microservices, and network connectivity is essential for success. In this course, Managing Docker Networking, you will learn how to set up scalable service discovery, connect containers to existing corporate networks, and create new container networks.

  • AWS Security Certification

Amazon Web Services, also known as AWS, is now one of the biggest cloud service and solution providers for businesses in all industries. AWS Security Certification Ahmedabad training is essential for professionals and businesses to improve their security posture when business-critical services are hosted on cloud and hybrid cloud ecosystems.

  • Microsoft Azure Cloud Certification

The Microsoft Azure course—also known as the Cloud platform—offers a variety of Cloud Services, such as computing, analytics, storage, and networking, and is intended to bring new solutions to life. Additionally, Azure training is frequently utilized as a teaching platform for cloud database hosting. One of the many significant global public cloud service providers is Microsoft Azure Cloud Certification in Ahmedabad. IBM, Amazon Web Services (AWS), and Google Cloud Platform (GCP) are a few of the major providers.

Conclusion

The Network-as-a-Service (NaaS) market has grown thanks to the rise of big data analytics and the use of the cloud for data storage. In addition, NaaS is based on the rapid development and evolution of IoT technologies, which have increased the demand for networking professionals and provided a secure work-life balance during the pandemic. To begin your career in networking, Highsky it Solutions offers any of the courses mentioned above.

Enroll in a data science course with Highsky it in 2023
28 January 2023

The in-depth understanding of how large amounts of information flow through an organization’s repository is called “data science.” Technology, data inference, and algorithm development all come together in it. This combination’s aggregate result is used to solve extremely difficult analytical problems. Data science involves in-depth process analysis to collect large amounts of data and examine recurring patterns. In addition, Data Science Training Ahmedabad can deal with market aspects and help control and organize the organization with respect to price and competition. However, there are some other courses that you can take alongside Data Science, some of which are discussed below:

  • Cloud Computing Training

Data science and cloud computing courses go hand in hand. Most of the time, a data scientist looks at different kinds of data stored in the cloud. Organizations are increasingly storing large datasets online as Big Data grows, necessitating the need for data scientists. A data scientist can learn how to use platforms like Windows Azure, which offers free and paid access to programming languages, tools, and frameworks, through Cloud Computing Training in Ahmedabad. MapReduce data storage and retrieval tools like Hadoop, Pig, and Hive are typically familiar to data scientists. Programming is also done in other languages like Python and Java.

  • Python Courses

The Data Science with Python Certification Course will demonstrate your proficiency in Python-based data science and analytics methods. By taking this Python Course in Ahmedabad, you will acquire comprehensive knowledge of data analytics, machine learning, data visualization, web scraping, natural language processing, and Python programming fundamentals. This interactive, hands-on course will help you advance your career because Python is becoming an increasingly important skill for many positions in data science.

  • DevOps Online course

Data science teams add a few more responsibilities to DevOps. In turn, data engineering necessitates flawless collaboration between DevOps teams and data scientists. In addition, operators are expected to supply clusters of Apache Airflow, Apache Hadoop, Apache Spark, Apache Kafka, and others to efficiently deal with data transformation and extraction. Data scientists then examine the transformed data for insights and correlations. Therefore, you can also take a DevOps Online Course in Ahmedabad to enhance your career in Data Science.

Conclusion

Data science is one of today’s most challenging and potentially lucrative careers. Top universities and certification organizations are now offering a new Data Science course. A company can use well-structured, efficiently processed data as a significant resource. Starting an online Data Science course from Highsky it Solutions is the easiest way to learn about data science.
