Data Science Training Ahmedabad
08 November 2023

Want to Improve your Data Science skills? Join a Certification program for Career Growth

If you are interested in making a career in data science, there is a high possibility of landing a lucrative job role. There is a growing demand for skilled professionals in the IT industry, and more and more business corporates need data scientists with up-to-date competence in analyzing data, whether standard or complex data sets, to help companies make more informed decisions. Therefore, you should have a Data Science Certification in Ahmedabad that proves your data science knowledge and shows the cases in which you can apply it.

How can joining Data Science Certification programs be valuable?

Having a data science certificate might not be a comprehensive solution for all your IT skills needs. But if you have a career in the IT sector, it can help you get a better job role while growing your skills. You can deepen your experience in understanding various data sets and stay updated on how to use and interpret the data required by current IT trends. So, you should improve your qualifications as per the needs of your job role as a data analyst. Data scientists can also gain more skills in choosing the right algorithm for business data with Microsoft Azure Training & Certification in Ahmedabad. If you work with data on the AWS platform, you can benefit from the AWS Security Training Course in Ahmedabad, where you can learn about working with a range of cloud-based services.

Today, businesses in various industries depend more on data scientists to handle the increasing volume of data created and compiled. This is where data science plays an increasingly important role in a variety of sectors, offering a plethora of career possibilities. All you have to do is gain the proper qualifications to become a data scientist. Hence, taking data science certification programs is worthwhile. HighSkyIT Solution offers data science certification courses that enable you to build skills on both Azure and AWS, so you can effectively tackle all your data science projects.

Conclusion:

Data science is a dynamic field in the IT realm that offers opportunities for learning and career advancement with ever-evolving technologies and approaches. Hence, you should enroll in a data science certification program that enables you to stay current on the newest tools and strategies. It will help you become a competitive and productive professional in the industry. Whether you want to become a data analyst or a big data engineer, taking these certification courses can help you meet the need for evolving skills and grow your career.

10 October 2023

What is Node.js and How to Install Node.js

What is Node.js

Node.js is an open-source, server-side runtime environment that enables you to execute JavaScript code on the server. It is designed for creating scalable and efficient network applications, and it is widely used to build web applications, APIs, and other server-side programs.

1 JavaScript Runtime:- Node.js enables you to execute JavaScript code on the server, whereas JavaScript is typically associated with web browsers for client-side scripting.

2 Single-Threaded:- Although Node.js applications are single-threaded, they can effectively manage many concurrent tasks by using callbacks and asynchronous operations. For some applications, this can lead to better performance.

3 Event-Driven and Non-Blocking:- An event-driven, non-blocking I/O mechanism is the foundation of Node.js. This indicates that it can manage numerous connections simultaneously without delaying the execution of any code. Applications that demand a lot of concurrency and real-time communication are especially well suited for it.

4 Package Manager (npm):- Node.js comes with npm (Node Package Manager), an ecosystem of open-source libraries and modules that allows developers to expand the capabilities of the platform and makes the process of creating applications more straightforward.

5 V8 JavaScript Engine:- Node.js uses the V8 JavaScript engine, created by Google and renowned for its great performance. V8 is quick and effective because it compiles JavaScript code into machine code.

6 Cross-Platform:- Node.js is extremely portable since it supports many different operating systems, including Windows, macOS, and many Unix-like platforms.

7 Large and Active Community:- A large number of resources, libraries, and tools are available for Node.js development because of the dynamic and active developer community that supports it.

The development of web servers and web applications, real-time applications like chat programs and online games, the creation of APIs, and the creation of command-line tools are just a few examples of common Node.js use cases. Its popularity has increased in part as a result of its adaptability in creating various applications and its ability to handle asynchronous I/O activities effectively.
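To make the event-driven model concrete, here is a minimal sketch (assuming Node.js is already installed, as covered in the next section). It writes a tiny HTTP server to a hypothetical server.js file and calls it from the shell; the file name and port are illustrative only.

cat > server.js <<'EOF'
// Minimal HTTP server: each request is handled by a callback,
// so the single thread is never blocked waiting on I/O.
const http = require('http');
http.createServer((req, res) => {
  res.end('Hello from Node.js\n');
}).listen(3000, () => console.log('Listening on port 3000'));
EOF

node server.js &
curl http://localhost:3000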

Install Node.js

If you want the latest releases, the Node.js project provides a yum repository that must first be enabled on your system. Additionally, you need development tools in order to build any native add-ons you install.

The default repositories of RHEL include a version of Node.js that can be used to deliver a consistent experience across several systems. Version 12.22.9 is what is currently available in the repository. While not the most recent version, it should be reliable and sufficient for quick language testing.

1 You can use the yum package manager to obtain this version. First, update your local packages and install Node.js by typing:

yum update
yum install nodejs

To confirm the installation, press Y when prompted. If you are asked to restart any services, hit ENTER to proceed with the default settings. To verify that the installation was successful, query node for its version number:

node -v

This is all there is to getting started with Node.js if the package in the repositories meets your needs. You should typically install npm, the Node.js package manager, as well. You can accomplish this by using yum to install the npm package:

yum install npm

2 If you install Node.js from the NodeSource repository instead, you don't need to install npm separately, because the NodeSource nodejs package includes both the node binary and npm.

By using yum and the NodeSource repository, you now have Node.js and npm installed. The installation and management of several Node.js versions are covered in the section that follows.

 

3 Installing Node Using the Node Version Manager

The Node Version Manager, or nvm, is a further flexible method of installing Node.js. You can simultaneously install and maintain numerous independent Node.js versions and their corresponding Node packages with this piece of software.

To learn how to install nvm on a RHEL 9 machine, visit the project’s GitHub page. The README file is displayed on the main page; copy the curl command from there. This will give you the most recent version of the installation script.

It is usually a good idea to audit the script to ensure it isn’t doing anything you disagree with before piping the command through to bash. To do that, remove the | bash element from the end of the curl command:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh

 

Take a look to make sure you understand the adjustments it is making. When you are finished, run the command again with | bash appended. The URL you use will change based on the most recent version of nvm; as of right now, the script can be downloaded and run by typing:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash

This will install the nvm script for your user account. Before using it, you must first source your ~/.bashrc file:

source ~/.bashrc

You can now ask nvm which Node versions are available:

nvm list-remote

The list is pretty lengthy! To install a particular version of Node, type any of the release versions you see. For example, to obtain version v16.14.0 (another LTS release), you can enter:

nvm install v16.14.0

You can view the versions you have installed by typing:

nvm list
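To start using one of the installed versions in the current shell, tell nvm to switch to it and confirm with node (a quick example, using the version installed above):

nvm use v16.14.0
node -v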

 

27 September 2023

How To Create Auto Scaling In [ AWS ]

It takes a few steps to set up auto-scaling in AWS, and it’s commonly used to dynamically change the number of Amazon EC2 instances in a group to match shifting workloads. Here is a step-by-step tutorial for setting up auto-scaling on AWS:

1 Logging into the AWS Console:
Using the login information for your AWS account, access the AWS Management Console.

How To Create (AMI)
How to take AMI of EC2 and launch new EC2 using AMI

2 Selecting or Building an Amazon Machine Image (AMI)
An existing AMI, or a custom one that you generate, represents the configuration of the EC2 instances you want to launch.

3 Create a Launch Template:
1 Go to the EC2 Dashboard.
2 Select “Launch Templates” from the left navigation pane.
3 Click the “Create launch template” button.


Launch template name
= highsky_template
Template version description = template_highsky

4 Choose the AMI that you want.

Click = My AMIs
And Click Amazon Machine Image (AMI) [ Image Name ] = auto_image

5 Set up the instance type, key pair, security groups, and, if necessary, any user data scripts.

Choose the instance type = t2.micro

Choose you’re  Key =   – – – – – – 

Choose you’re Network Settings 

6 After reviewing the settings, click “Create launch template.”

                             Create launch template

4 Create an Auto Scaling Group:
1 After creating the launch template, go to the EC2 Dashboard and select “Auto Scaling Groups” from the left navigation pane.


2 Click the “Create Auto Scaling group” button.

Auto Scaling group name = Auto_scaling_group
Launch template = highsky_template
Click = Next

3 Select the launch template you made in the previous step.

VPC = ( Default VPC )
Availability Zones and Subnets = ( Your Choice )
And Click = Next 

4 Configure advanced options – optional: [ Choose a load balancer to distribute incoming traffic for your application across instances to make it more reliable and easily scalable. You can also set options that give you more control over health check replacements and monitoring.]

Choose = No load balancer

5 Health checks [ Health checks increase availability by replacing unhealthy instances. When you use multiple health checks, all are evaluated, and if at least one fails, instance replacement occurs.]

Health check grace period = 180 seconds
  And Click = Next

6 Set the group’s desired capacity, minimum, and maximum instance counts.

Desired capacity = 1
Minimum capacity = 1
Maximum capacity = 2
And Click = Next

5 Set Up Notifications (Optional):
Notifications can be set up to notify you of scaling events. Email, SMS, and other destinations can receive these updates via Amazon SNS (Simple Notification Service).

Click  =  Next 

6 Test Auto Scaling:
1 Manually start scaling events by simulating traffic or load spikes to make sure your system behaves as you anticipate.
2 Watch how the Auto Scaling group changes the number of instances it has based on the policies you’ve set.

Click = Next

7 Monitoring and Upkeep:
1 Keep a close eye on the performance of your Auto Scaling group and modify scaling rules as necessary to meet your application’s needs.
2 Monitor the health of your instances and replace any unhealthy instances immediately.

And Click = Create Auto Scaling groups 

Check-in Instances

                            Successfully Created Auto Scaling Group
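If you prefer the command line, the same group can be sketched with the AWS CLI (a rough equivalent, assuming the launch template above already exists; the subnet ID is a placeholder you must replace with your own):

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name Auto_scaling_group \
  --launch-template LaunchTemplateName=highsky_template \
  --min-size 1 --max-size 2 --desired-capacity 1 \
  --health-check-grace-period 180 \
  --vpc-zone-identifier "subnet-0123456789abcdef0"

# Confirm the group and the instances it launched
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names Auto_scaling_group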

 

Docker Certification in Ahmedabad
15 September 2023

Get Your Docker Certification Demystified For Container Mastery

Do you want to get your Docker certification and earn an industry-recognised credential? To get this recognition, you must pass the Docker Certified Associate (DCA) exam. It’s time to start with a dedicated course to improve your Docker skills. Courses for Docker Certification in Ahmedabad are available at competitive prices, with professionals guiding the candidates along the way. Let’s learn more about the certification course.

What Will You Achieve with the Certification Course?

  • Digital certificate and Docker Certified Associate logo.
  • Recognition of Docker skills with official Docker credentials.
  • Access to the Docker Certified professional network.

While preparing for your Docker certification exam, you have to cover major concepts related to Docker skills to become a proficient developer, application architect, and system administrator. Here are the concepts you will cover:

Running Containerised Applications

You will learn to run containerised apps from pre-existing images. This concept will help you improve your programming and development skills by enabling you to spin up dev environments. There are centres for DevOps Online Training Ahmedabad where you can learn this concept.

Deploying Images in the Cluster

Another major concept is deploying images in the cluster in the form of containers, through which you can achieve continuous delivery.

Installation and Maintenance of Docker platform

This concept will provide you with a clear insight into the Docker platform. Here, you will learn to install and operate the platform. Moreover, you will also get an idea of its maintenance and upgrades. It will provide you with an insight into the internals of Docker.

Configuration and Troubleshooting

In this concept, you will learn to configure and troubleshoot the Docker engine. There are prominent Cloud Computing Certifications Ahmedabad that also offer Docker certification courses, where all these concepts are covered. When you dive deep into the core topics of configuration and troubleshooting, you will cover topics such as Orchestration, Installation and Configuration, Storage and Volumes, Image Creation, Management, and Registry, Security, and Networking.

Other Concepts of Container Mastery

There are also other concepts to cover on the Docker platform, such as triaging issue reports from stakeholders and resolving them, working with new Docker environments, and performing general maintenance. You will also learn to migrate traditional applications to containers; this will help you migrate your existing apps as Docker containerised apps. You can consult Ansible Training Ahmedabad to learn about the Docker certification.

These are the major concepts covered in Docker certification courses. To know more about the course, DCA exam, and concepts, get in touch with HighSkyIT Solution.

10 August 2023

How to create S3 bucket using Terraform

To use Terraform to construct an Amazon S3 bucket, you must define an appropriate resource block in your Terraform setup. Here’s a step-by-step tutorial on creating an S3 bucket with Terraform:

1 Configure AWS Credentials:
Before you continue, make sure you have your AWS credentials set up. You can use the AWS CLI aws configure command or specify them as environment variables.

2 Follow these steps to create a Terraform configuration:
Create a .tf file (for example, main.tf) to define your Terraform setup.

3 Define the S3 Bucket:
Add the following Terraform code to your main.tf file to define an S3 bucket resource:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.8.0"
    }
  }
}

provider "aws" {
  region     = "ap-south-1"
  access_key = "Your_Access_Key"
  secret_key = "Your_Secrt_Key"

}

resource "aws_s3_bucket" "bucket" {
  bucket = "highskybucket"

  tags = {  
    Name        = "My bucket"
  }
}

Replace “highskybucket” with your S3 bucket’s unique name. Bucket names must be globally unique throughout AWS.

4 Initialize Terraform:
To initialize Terraform, navigate to the directory containing your Terraform configuration file and execute the following command:

 terraform init

5 Plan the Configuration:
In HashiCorp Terraform, the terraform plan command generates an execution plan outlining the modifications Terraform will make to your infrastructure based on your existing configuration. Without actually making the changes, it demonstrates to you what steps Terraform will take to create, update, or remove resources, for example. By doing so, you may examine and confirm the modifications before implementing them in your infrastructure.

terraform plan

6 Apply the Configuration:
Run the following command to create the AWS S3 bucket:

terraform apply

7 Review and Confirm:
Terraform will display a plan of what it aims to build. After reviewing the plan, type yes to confirm and create the AWS S3 bucket.

Output On AWS Infra

And Go To s3 Service 
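You can also verify the new bucket from a terminal (assuming the AWS CLI is configured with the same credentials used in the provider block):

aws s3 ls | grep highskybucket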

 


09 August 2023

How To Create IAM User and assign policy to user by Terraform

To create an AWS IAM user with Terraform, use the aws_iam_user resource provided by the AWS provider. Here’s a step-by-step tutorial for creating an AWS user with Terraform.

1 Configure AWS Credentials:
Make sure you have your AWS credentials set up before you begin. You may either specify them as environment variables or use the AWS CLI aws configure command.

2 Create a Terraform configuration by following these steps:
To define your Terraform setup, create a .tf file (for example, main.tf).

3 Create an AWS User Resource:
To define the AWS user resource, add the following code to your main.tf file:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region     = "ap-south-1"
  access_key = "your_Access_Key"
  secret_key = "Your_Secret_Key"

}


resource "aws_iam_user" "example_user" {
  name = "nitin_user"
}


resource "aws_iam_access_key" "kye1" {
  user = aws_iam_user.example_user.id

}


output "secret_key" {
  value     = aws_iam_access_key.kye1.secret
  sensitive = true
}


output "access_key" {
  value = aws_iam_access_key.kye1.id

}


resource "aws_iam_policy_attachment" "test-attach" {
  name       = "test-attachment"
  users      = [aws_iam_user.example_user.name]
#   roles      = [aws_iam_role.role.name]
#   groups     = [aws_iam_group.group.name]
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}

4 Initialize Terraform:
To start Terraform, navigate to the directory containing your Terraform configuration file and run the following command:

 terraform init

5 Plan the Configuration:
In HashiCorp Terraform, the terraform plan command generates an execution plan outlining the modifications Terraform will make to your infrastructure based on your existing configuration. Without actually making the changes, it demonstrates to you what steps Terraform will take to create, update, or remove resources, for example. By doing so, you may examine and confirm the modifications before implementing them in your infrastructure.

terraform plan

6 Apply the Configuration:
Run the following command to create the AWS user:

terraform apply

7 Review and Confirm:
Terraform will display a plan of what it aims to build. After reviewing the plan, type yes to confirm and create the AWS user.

Output On AWS Infra

And Check Policy 
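As a quick check from the terminal, you can read the generated keys from Terraform’s outputs and confirm the policy attachment with the AWS CLI (assuming the CLI is configured with credentials allowed to read IAM):

terraform output access_key
terraform output -raw secret_key

aws iam list-attached-user-policies --user-name nitin_user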


07 August 2023

How to create ec2 instance using terraform

What is Terraform

HashiCorp’s Terraform is an open-source infrastructure as code (IaC) tool. It enables you to use a declarative configuration language to define and manage your cloud infrastructure resources and services. You may use Terraform to automatically supply and manage a variety of infrastructure parts, including virtual machines, networks, storage, and more, across numerous cloud providers or on-premises environments.

1 Declarative Configuration: You identify the resources you require, their configurations, and relationships in a declarative configuration file that is often written in the HashiCorp Configuration Language, or HCL.

2 Provider Support: Numerous cloud service providers (such as AWS, Azure, Google Cloud, etc.) and other infrastructure elements (such as Docker, Kubernetes, etc.) are supported by Terraform. Terraform can be used to manage the resources and configurations that each supplier has to provide.

3 Versioning and Collaboration: Versioning and storing Terraform configurations in version control platforms like Git allows for team collaboration and preserves an audit trail of modifications.

4 Idempotency: Terraform operates under the idempotency principle, allowing you to apply the same configuration repeatedly without experiencing unintended consequences. To get the infrastructure to the desired state, Terraform will only perform the required adjustments.

5 Plan and Apply: When you modify your configuration file, Terraform can provide an execution plan that outlines the changes that will be performed to your infrastructure. After reviewing the plan, you apply it to bring about the desired changes.

6 State Management: Your infrastructure’s current state is recorded by Terraform in a state file. This file aids Terraform in comprehending the configurations and resources that are currently deployed. It is crucial for updating and maintaining your infrastructure.

Compared to manual intervention, Terraform substantially simplifies the provisioning and management of infrastructure. It makes it possible to use infrastructure as code techniques, which facilitate the replication of environments, the management of modifications, and the maintenance of consistency throughout various stages of development and deployment.

Create ec2 instance

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.8.0"
    }
  }
}

provider "aws" {
  region     = "ap-south-1"
  access_key = "Access_Key"
  secret_key = "Secret_Key"
}

# VPC to hold all of the networking resources
resource "aws_vpc" "my_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "MyVPC"
  }
}

# Public subnet: instances launched here get a public IP
resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "ap-south-1a"

  tags = {
    Name = "public1"
  }
}

# Private subnet (no public IPs)
resource "aws_subnet" "private_subnet" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "ap-south-1a"

  tags = {
    Name = "private1"
  }
}

# Internet gateway so the public subnet can reach the Internet
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.my_vpc.id
}

# Route table sending all outbound traffic through the internet gateway
resource "aws_route_table" "PublicRT" {
  vpc_id = aws_vpc.my_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.PublicRT.id
}

# Security group allowing SSH and HTTP in, and all traffic out
resource "aws_security_group" "my_nsg" {
  name        = "my_nsg"
  description = "Allow all inbound traffic"
  vpc_id      = aws_vpc.my_vpc.id

  ingress {
    description = "ssh from VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# The EC2 instance itself, placed in the public subnet
resource "aws_instance" "instance" {
  ami                    = "ami-072ec8f4ea4a6f2cf"
  vpc_security_group_ids = [aws_security_group.my_nsg.id]
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public_subnet.id
  key_name               = aws_key_pair.sky_key.key_name

  tags = {
    Name = "highsky instance"
  }
}

# Key pair generated by Terraform and registered with AWS
resource "aws_key_pair" "sky_key" {
  key_name   = "sky_key"
  public_key = tls_private_key.rsa.public_key_openssh
}

# Save the private key locally so you can SSH into the instance
resource "local_file" "tf_key" {
  content  = tls_private_key.rsa.private_key_pem
  filename = "tfkey.pem"
}

resource "tls_private_key" "rsa" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

output "publicip" {
  value = aws_instance.instance.public_ip
}

1 In the HashiCorp Terraform infrastructure as code (IaC) tool, the terraform init command initializes a new or existing Terraform configuration in a directory. When you run terraform init, Terraform creates the environment, downloads provider plugins, and gets the directory ready for managing your infrastructure.

terraform init

2 In HashiCorp Terraform, the terraform plan command generates an execution plan outlining the modifications Terraform will make to your infrastructure based on your existing configuration. Without actually making the changes, it demonstrates to you what steps Terraform will take to create, update, or remove resources, for example. By doing so, you may examine and confirm the modifications before implementing them in your infrastructure.

terraform plan

3 In HashiCorp Terraform, the changes specified in your configuration are applied to your infrastructure using the Terraform apply command. This command executes the operations required to create, update, or delete resources in accordance with your settings using the execution plan produced by Terraform plan.

terraform apply

yes

 Output On AWS Infra
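Once the apply completes, you can try logging in to the new instance with the private key Terraform saved locally (a sketch; the ec2-user login name assumes an Amazon Linux AMI, so adjust it to match the AMI you used):

chmod 400 tfkey.pem
ssh -i tfkey.pem ec2-user@$(terraform output -raw publicip)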

 

Linux Certification Ahmedabad
19 July 2023

Continuing Education with Red Hat: Staying Ahead in Open Source Technologies

In today’s rapidly evolving digital landscape, staying ahead in open-source technologies is essential for professionals seeking to excel in the field. With the vast popularity and significance of Linux administration and Red Hat technologies, it becomes crucial to equip oneself with the necessary skills and knowledge. If you are based in Ahmedabad, India, you’re in luck. A leading training provider offers top-notch Red Hat Training Course & Certification Ahmedabad designed to enhance your proficiency and open doors to exciting career opportunities.

Some features of Linux Administration Online Classes in Ahmedabad

  • Comprehensive Curriculum:

This course provides a comprehensive curriculum that covers all aspects of managing and maintaining Linux-based systems. From basic concepts to advanced topics, you’ll gain a deep understanding of Linux architecture, command-line operations, user management, file systems, networking, security, and more. The curriculum is designed to equip you with the skills to handle real-world scenarios in Linux environments.

  • Flexibility and Convenience:

One of the primary advantages of online classes is the flexibility they offer. Whether you’re a working professional or a student, you can access the course materials and lectures at a time that suits you best. Companies like Highsky IT Solutions allow you to balance your learning with other commitments, making it convenient for individuals with busy schedules.

  • Interactive Learning Experience:

Engaging and interactive learning experiences are essential for effective comprehension and skill development. Through virtual labs, practical exercises, quizzes, and discussion forums, you’ll have hands-on opportunities to apply your knowledge, collaborate with peers, and seek guidance from experienced instructors.

  • Experienced Instructors:

To ensure a high-quality learning experience, Linux Administration Online Classes Ahmedabad are led by experienced instructors with extensive knowledge in the field. These instructors bring real-world expertise and industry insights to the virtual classroom, providing practical examples and guidance throughout the course.

  • Certification Opportunities:

Completing Linux Administration Online Classes may allow you to earn industry-recognized certifications. Choosing classes that align with recognized certification programs is essential to maximize the value of your learning journey.

Enhance Your Linux Expertise with RHCE, RHCSA, and Red Hat Training in Ahmedabad

In Ahmedabad, you can broaden your Linux administration skills through RHCE and RHCSA classes. These comprehensive programs offer a range of features to help you excel in Linux-based environments. RHCE RHCSA Classes in Ahmedabad provide in-depth knowledge and practical skills required to design, deploy, and manage Red Hat solutions effectively. Linux Training in Ahmedabad covers various topics such as system administration, network configuration, and security management. These certifications validate your expertise, enhancing your professional credibility. By enrolling in these programs, you can acquire valuable knowledge, hands-on experience, and potential career advancement opportunities in Linux administration.

Conclusion:

In a rapidly changing digital landscape, continuous education is vital for professionals seeking to stay ahead. Many offer a diverse range of online classes and training programs tailored to meet the demands of open-source technologies. By enrolling in Linux administration, Red Hat training, and certification courses, you can enhance your skill set and gain a competitive edge. Visit the highskyit.com website for more information and start your educational journey toward success.

05 July 2023

What Is OwnCloud & How To Install In Ubuntu 20.04

What Is OwnCloud?

OwnCloud is a self-hosted file synchronization and sharing platform that lets individuals and organizations securely store, access, and share their files and documents. It serves as an alternative to cloud storage services like Dropbox, Google Drive, or OneDrive, offering users complete control over their data.

With OwnCloud, you may construct a private cloud storage solution on your own server or by using a hosting company. It offers capabilities like file synchronization between several devices, file sharing, group document editing, and data backup. The platform provides clients for many operating systems, such as Windows, macOS, Linux, Android, and iOS, enabling smooth file access from numerous devices.

One of OwnCloud’s main benefits is that you can store your data on your own servers or those of a reputable hosting company, guaranteeing that you will always have ownership and control over your files. Additionally, it offers options for encryption to increase security during file transfers and storage.

OwnCloud provides a number of extensions and plugins to enhance its functionality in addition to the essential file synchronization and sharing features. Task management, music streaming, calendar and contact synchronization, and interaction with other services like Microsoft Office Online or Collabora Online for group editing are a few of them.

Overall, OwnCloud offers a versatile and adaptable cloud storage option that enables people and organizations to manage their files, share information, and collaborate while still having complete control over their data.

How To Install OwnCloud In Ubuntu 20.04?

1 System Packages Update:-

Use the apt command below to update the system packages and repositories before you begin.

# apt update -y && apt upgrade -y

2 Install Apache, MariaDB, And PHP Packages:-

How to install MariaDB and use MariaDB redhat

( 1 ) Apache:- Apache is free and open-source web server software that allows websites to be hosted on the Internet. An Apache server runs on a computer and serves the websites and files on that computer to other computers on the Internet.

( 2 ) MariaDB:- Similar to MySQL, MariaDB is made to use tables, columns, and rows to store and manage structured data. It provides several programming interfaces and connectors for various computer languages, as well as SQL (Structured Query Language) for querying and modifying data.

( 3 ) PHP:- OwnCloud is written in PHP and is normally accessed through a web interface. For this reason, we will install the Apache web server to serve OwnCloud’s files, along with PHP and the other PHP modules required for OwnCloud to run efficiently.

# apt install -y \
  apache2 libapache2-mod-php \
  mariadb-server openssl redis-server wget php-imagick \
  php-common php-curl php-gd php-gmp php-bcmath php-imap \
  php-intl php-json php-mbstring php-mysql php-ssh2 php-xml \
  php-zip php-apcu php-redis php-ldap php-phpseclib

3 After the installation is finished, you can use the dpkg command to check whether Apache was installed:-

# dpkg -l apache2

4 Run the commands to launch Apache and allow it to start automatically:-

( 1 ) Start:- Start Apache2 Service

# systemctl start apache2

( 2 ) Enable:- Enable the apache2 service so that it starts automatically at boot time:

# systemctl enable apache2

( 3 ) Status:- Check that the service is running:
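# systemctl status apache2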

5 Check that PHP is installed and which version:-

# php -v

6 MariaDB Secure installation:-

MariaDB, just like MySQL, is not secure by default. Therefore, you must take another step and run the mysql_secure_installation script.

The command guides you through a series of prompts. You will need to create a root password first, because the default unix socket authentication for the MariaDB root user is insufficiently secure.

So, decline Unix socket authentication by pressing n and hitting ENTER when prompted:

# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we’ll need the current
password for the root user. If you’ve just installed MariaDB, and
you haven’t set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): [Press Enter]

OK, successfully used password, moving on…

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n]  [ Press Y ]

New password:                  [ redhat@123 ]
Re-enter new password:   [ redhat@123 ]
Password updated successfully!
Reloading privilege tables..
… Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] [ press Y ]

… Success!

Normally, root should only be allowed to connect from ‘localhost’. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] [ Press Y ]

… Success!

By default, MariaDB comes with a database named ‘test’ that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] [ Press Y ]

– Dropping test database…
… Success!
– Removing privileges on test database…
… Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] [Press Y ]

… Success!

Cleaning up…

All done! If you’ve completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

7 Create MariaDB Database:-

To store files both during and after installation, we must build a database for Owncloud. Therefore, log into MariaDB.

mysql -u root -p

Enter password: redhat@123

MariaDB [(none)]> CREATE DATABASE highsky_db;
MariaDB [(none)]> GRANT ALL ON highsky_db.* TO 'harry'@'localhost' IDENTIFIED BY 'redhat@123';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> EXIT
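To confirm that the database exists and the new user can reach it, you can log back in as that user and list the databases (a quick check using the credentials created above):

mysql -u harry -p -e "SHOW DATABASES;"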

8 Download OwnCloud:-

wget https://download.owncloud.com/server/stable/owncloud-complete-latest.tar.bz2

9 Extract Directory:-

# tar -xjf owncloud-complete-latest.tar.bz2
# ls

10 Set the permissions:-

# chown -R www-data:www-data owncloud
# chmod -R 755 owncloud

11 Move the directory to the web root:-

mv owncloud /var/www/

12 Configure Apache for OwnCloud:-

At this stage, we will set up Apache to serve OwnCloud’s files. To accomplish that, create the following OwnCloud configuration file:

# vim /etc/apache2/conf-available/owncloud.conf
Alias /owncloud "/var/www/owncloud/"

<Directory /var/www/owncloud/>
  Options +FollowSymlinks
  AllowOverride All

 <IfModule mod_dav.c>
  Dav off
 </IfModule>

 SetEnv HOME /var/www/owncloud
 SetEnv HTTP_HOME /var/www/owncloud

</Directory>

Save and close the file.

13 The next step is to run the commands listed below to activate all the necessary Apache modules and the newly added configuration:

# a2enconf owncloud

# a2enmod rewrite

# a2enmod headers

# a2enmod env

# a2enmod dir

# a2enmod mime

14 Restarting the Apache web server will make the modifications effective:-

systemctl restart apache2

15 Completing The Installation Of OwnCloud

Once all relevant configurations have been completed, the only step left is to finish the OwnCloud installation in a browser. Open your browser and enter the address of your server followed by /owncloud (for example, http://your-server-ip/owncloud), then complete the setup with the details below:

Username = admin
Password = admin

Database User = harry
Database Password = redhat@123
Database name = highsky_db


Successfully Installed
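As an optional command-line check, ownCloud ships with an occ administration tool you can query for status (assuming the install path used above):

# sudo -u www-data php /var/www/owncloud/occ status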

 

22 June 2023

How To Configure API Gateway With AWS Lambda Function Integration

An API Gateway serves as a common entry point for APIs (Application Programming Interfaces). It is a service offered by cloud computing platforms like Amazon Web Services (AWS) and provides a managed option for developing, deploying, and managing APIs securely and at scale.

Clients can access and interact with the functionality and data offered by backend services by using API Gateway, which acts as a proxy between clients and those services. It serves as a gatekeeper or middleman that receives and processes API requests before sending them to the proper backend service.

API Gateway delivers the following crucial advantages and features:

( 1 ) Create and manage APIs with API Gateway. This includes specifying resources, methods (such as GET, POST, PUT, and DELETE), and the request/response structures that go with each. It offers a method for structuring and organizing your APIs, which makes them simpler to maintain.

( 2 ) Authentication, validation, transformation, and mapping are just a few of the actions that API Gateway can carry out on incoming requests. This gives you the chance to edit or tailor the requests before they go to the backend services, ensuring that they follow any security or format requirements.

( 3 ) Access control and security: API Gateway has built-in security mechanisms to safeguard your APIs and the exposed data. It supports a variety of authentication methods, including OAuth, API keys, AWS Cognito, and AWS Identity and Access Management (IAM) roles. By doing so, you can manage API access and user or client application authentication.

( 4 ) Scalability and performance: API Gateway is built to handle large numbers of API requests and can scale dynamically to address changing traffic loads. It offers caching solutions to enhance performance and lighten the burden on backend services. For further management and control of the usage of your APIs, it includes rate limiting and throttling.

( 5 ) Integration with Backend Services: API Gateway enables integration with a variety of backend services, including Amazon EC2 instances, AWS Lambda functions, and HTTP endpoints. This makes it possible for you to use already-existing services or create new ones to provide the functionality demanded by your APIs.

( 6 ) Monitoring and analytics: API Gateway gives you the logging and tracking tools you need to keep tabs on your APIs’ performance, failures, and usage. You can monitor and gather information about the usage and health of your APIs thanks to its integration with services like AWS CloudWatch.

You may streamline the creation, deployment, and management of APIs by using API Gateway, while also transferring many operational problems to the managed service. In addition to providing a scalable and secure gateway for API connection, it aids in isolating client applications from backend services.

Lambda function

1. Navigate to the Lambda dashboard.

2. Click on the “Create function” button.

3. Choose the type of function you want to create. You can author a function from scratch, use a blueprint, or deploy one from the serverless application repository.

4. Give your function a name and description.

5. Choose a runtime for your function, such as Python, Node.js, or Java.

( A runtime is a version of a programming language or framework that you can use to write Lambda functions. Lambda supports runtime versions for Node.js, Python, Ruby, Go, Java, C# (.NET Core), and PowerShell (.NET Core)

To use other languages in Lambda, you can create your own runtime.

Note that the console code editor supports only Node.js, Python, and Ruby. If you choose a compiled language, such as Java or C#, you edit and compile your code in your preferred IDE and upload a deployment package to the function. )

For this walkthrough, select a Python runtime.

6. Configure the function’s execution role, which determines the permissions that the function has to access AWS services.

Change default execution role
Execution role
Choose a role that defines the permissions of your function. To create a custom role, go to the IAM console
Create a new role with basic Lambda permissions

Click = Create function

Successfully created the function = highsky-function.

API Gateway 

1 Open the API Gateway service: Once logged in, look for “API Gateway” in the “Networking & Content Delivery” section or in the search box of the AWS Management Console.

2 Click on “Create API”: To begin building a new API, use the “Create API” option from the API Gateway service dashboard.

3 Choose the API type: Choose either “REST API” or “WebSocket API” depending on the type of API you want to build. While WebSocket APIs allow for bidirectional communication through the WebSocket protocol, REST APIs are frequently utilised for HTTP-based communication.

4 Select a protocol: If you decide to develop a REST API, choose whether you wish to use HTTP or HTTPS. While HTTP is suitable for testing and development, HTTPS is advised for production environments.

Click = Build

Click = Ok 

5 Choose a name for your API:

Click = New API

Give your API a name that clarifies its function, and choose an endpoint type:

API name* = highsky-API

Description = API-highsky

Endpoint Type = Regional

Click = Create API

6 Configure the API: Create the API configuration by specifying the resources, methods, and integrations. To add a method to a resource (such as GET, POST, or PUT), click “Create Method”.

Click = Actions 
Click = Create Method 

Click  = Save 

Click = Lambda highsky-function

Test = function

Go to API Gateway  

Click = Actions and Deploy API

Click = Deploy 

Click the = Invoke URL

Successfully
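From a terminal, you can exercise the deployed API and the function behind it (a sketch; replace the placeholder with the actual Invoke URL shown in the console):

curl https://<api-id>.execute-api.ap-south-1.amazonaws.com/<stage-name>

# Or invoke the Lambda function directly with the AWS CLI
aws lambda invoke --function-name highsky-function response.json
cat response.json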

 
