Published May 27, 2018 by

Grant Access To Only One S3 Bucket to AWS IAM User

This tutorial gives an overview of how to restrict an AWS IAM user’s access to a single S3 bucket.

Create a policy for AWS bucket

Go to IAM >> Policies >> Create Policy >> JSON



Insert the policy below:

 {  
   "Version": "2012-10-17",  
   "Statement": [  
     {  
       "Effect": "Allow",  
       "Action": [  
             "s3:GetBucketLocation",  
             "s3:ListAllMyBuckets"  
            ],  
       "Resource": "arn:aws:s3:::*"  
     },  
     {  
       "Effect": "Allow",  
       "Action": "s3:*",  
       "Resource": [  
         "arn:aws:s3:::YOUR-BUCKET",  
         "arn:aws:s3:::YOUR-BUCKET/*"  
       ]  
     }  
   ]  
 }  

Now click on 'Review Policy' 
  

Here it asks for a policy name and description. After entering the details, click the 'Create Policy' button.

AWS has now created a new policy that grants access to only the single S3 bucket you named in the policy.



Attach Policy to User and Group

Now attach this policy to the users and groups that should have access to only the bucket named in the policy; access to the remaining buckets stays restricted.
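The same setup can be sketched from the command line, assuming the AWS CLI is configured; the policy name, file name, user, group, and account ID below are hypothetical placeholders:

```shell
# Create the managed policy from the JSON document shown above,
# saved locally as single-bucket-policy.json (hypothetical file name)
aws iam create-policy \
    --policy-name my-s3-single-bucket-policy \
    --policy-document file://single-bucket-policy.json

# Attach the policy to a user (replace the account ID and user name)
aws iam attach-user-policy \
    --user-name my-user \
    --policy-arn arn:aws:iam::123456789012:policy/my-s3-single-bucket-policy

# Or attach it to a group instead
aws iam attach-group-policy \
    --group-name my-group \
    --policy-arn arn:aws:iam::123456789012:policy/my-s3-single-bucket-policy
```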


Published May 25, 2018 by

Block Traffic From a Single IP in AWS

A few days ago, one of my clients complained that their server was being hammered by traffic from one particular IP.

It was causing a 20x increase in traffic to some URLs, so obviously I wanted to block all traffic from that single IP.

So here is a quick tutorial for doing this.

Open VPC dashboard
Open the “Network ACLs” view


Open the ACL editor

1. Select the subnet to which your EC2 instances or load balancers are connected.
2. Click "Inbound Rules"
3. Click "Edit"

Add a rule to block the traffic/IP

You will now see the ACL editor. On the last row, you can add a new rule.


Here is how you should fill out the fields:

#Rule
Use any number less than 100, which is the number of the default accept-all rule. This is important because rules are evaluated in order, and your rule needs to come before the default.

#Type
Select “All traffic”, or the particular protocol you want to block

#Source
The CIDR you want to block. To match a single IP address, enter it here and append /32. For example, I blocked 22.87.45.187/32

#Allow/Deny
Select “DENY”

Now click Save and you should see the updated rules table.
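The same deny rule can be sketched with the AWS CLI; the network ACL ID below is a hypothetical placeholder for your subnet's ACL:

```shell
# Add an inbound DENY rule for a single IP (/32), numbered below 100
# so it is evaluated before the default allow-all rule
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 90 \
    --protocol -1 \
    --rule-action deny \
    --cidr-block 22.87.45.187/32
```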
Published May 18, 2018 by

AWS EC2- Elastic Load Balancer

Elastic Load Balancer (ELB) automatically distributes incoming request traffic across multiple Amazon EC2 instances, resulting in higher fault tolerance. It detects unhealthy instances and automatically reroutes traffic, in a round-robin manner, to healthy instances until the unhealthy instances have been restored. However, if we need more complex routing algorithms, we should choose other services such as Amazon Route 53.

ELB consists of the following three components.

Load Balancer
This component monitors and handles the requests coming in through the Internet/intranet and distributes them to the EC2 instances registered with it.

Control Service
This component automatically scales handling capacity in response to incoming traffic by adding and removing load-balancing capacity as required. It also performs health checks on instances.

SSL Termination
ELB provides SSL termination, which saves precious CPU cycles that would otherwise be spent encoding and decoding SSL within the EC2 instances attached to the ELB. An X.509 certificate must be configured on the ELB. The SSL connection between the ELB and the EC2 instance is optional; we can also terminate the SSL connection at the ELB alone.

Features of ELB

  • ELB is designed to handle very high numbers of requests per second under a gradually increasing load pattern.
  • We can configure EC2 instances and load balancers to accept traffic.
  • We can add/remove load balancers as per requirement without affecting the overall flow of information.
  • It is not designed to handle a sudden spike in requests, such as for online exams, online trading, etc.
  • Customers can enable Elastic Load Balancing within a single Availability Zone or across multiple zones for even more consistent application performance.

Create A Load Balancer

Step 1 − Go to Amazon EC2 console.

Step 2 − Select your load balancer region from the region menu on the right side.

Step 3 − Select Load Balancers from the navigation pane and choose the Create Load Balancer option. Choose Classic Load Balancer. A pop-up window will open, and we need to provide the required details.



Step 4 − In the Load Balancer name box, enter a name for your load balancer.
In the Create LB inside box, select the same network that you selected
for your instances.
Also select Enable advanced VPC configuration if you selected the default VPC.

Step 5 − Click the Add button and a new pop-up will appear to select subnets from the list of available subnets as shown in the following screenshot. Select only one subnet per availability zone. This window will not appear if we do not select Enable advanced VPC configuration.

Step 6 − Click the Next: Assign Security Groups button and create a new security group for the load balancer.


Step 7 − A new pop-up will open with health check configuration details filled in with default values. We can set these values on our own; however, this is optional. Click on Next: Add EC2 Instances.


Step 8 − A pop-up window will open showing information about instances, such as registered instances. Add instances to the load balancer by selecting the Add EC2 Instance option and filling in the required information. Click Add Tags.

Step 9 − Adding tags to your load balancer is optional. To add tags, click the Add Tags page and fill in the details, such as the key and value of the tag. Then choose the Create Tag option. Click the Review and Create button.

A review page opens on which we can verify the settings. We can even change the settings by choosing the edit link.

Step 10 − Click Create to create your load balancer and then click the Close button.
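The console steps above can also be sketched with the AWS CLI for a Classic Load Balancer; the load balancer name, subnet, security group, and instance IDs below are hypothetical placeholders:

```shell
# Create a Classic Load Balancer listening on HTTP port 80
aws elb create-load-balancer \
    --load-balancer-name my-classic-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --subnets subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0

# Register EC2 instances with the load balancer
aws elb register-instances-with-load-balancer \
    --load-balancer-name my-classic-lb \
    --instances i-0123456789abcdef0
```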



Delete a Load Balancer

Step 1 − Go to Amazon EC2 console.

Step 2 − Choose Load Balancers option from the navigation pane.

Step 3 − Select the load balancer and click the Actions button.


Step 4 − Click the Delete button. An alert window will appear, click the Yes, Delete button.
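For reference, the same deletion can be done with a single CLI call, assuming a Classic Load Balancer; the name is a hypothetical placeholder:

```shell
# Delete the Classic Load Balancer (registered instances keep running)
aws elb delete-load-balancer --load-balancer-name my-classic-lb
```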

Published May 17, 2018 by

AWS Basic Architecture

This is the basic structure of AWS EC2, where EC2 stands for Elastic Compute Cloud. EC2 allows users to use virtual machines of different configurations as per their requirement. It allows various configuration options, mapping of an individual server, various pricing options, etc. We will discuss these in detail in AWS Products section. Following is the diagrammatic representation of the architecture.


S3 allows the users to store and retrieve various types of data using API calls. It doesn’t contain any computing element.


Load Balancing

Load balancing simply means distributing hardware or software load across web servers, which improves the efficiency of the server as well as the application. Following is the diagrammatic representation of AWS architecture with load balancing. A hardware load balancer is a very common network appliance used in traditional web application architectures. AWS provides the Elastic Load Balancing service: it distributes traffic to EC2 instances across multiple availability zones and supports dynamic addition and removal of EC2 hosts from the load-balancing rotation. Elastic Load Balancing can dynamically grow and shrink the load-balancing capacity to adjust to traffic demands, and it also supports sticky sessions to address more advanced routing needs.

Elastic Load Balancer

It is used to spread the traffic to web servers, which improves performance. AWS provides the Elastic Load Balancing service, in which traffic is distributed to EC2 instances over multiple availability zones, and dynamic addition and removal of AWS EC2 host from the load-balancing rotation. Elastic Load Balancing can dynamically grow and shrink the load-balancing capacity as per the traffic conditions.

Auto Scaling

The difference between AWS cloud architecture and the traditional hosting model is that AWS can dynamically scale the web application fleet on demand to handle changes in traffic.


In the traditional hosting model, traffic forecasting models are generally used to provision hosts ahead of projected traffic. In AWS, instances can be provisioned on the fly according to a set of triggers for scaling the fleet out and back in. AWS Auto Scaling can create capacity groups of servers that can grow or shrink on demand.

CloudFront

It is responsible for content delivery, i.e., it is used to deliver the website. It may serve dynamic, static, and streaming content using a global network of edge locations. Requests for content at the user's end are automatically routed to the nearest edge location, which improves performance. AWS CloudFront is optimized to work with other AWS services, like AWS S3 and AWS EC2. It also works fine with any non-AWS origin server and stores the original files in a similar manner. In AWS, there are no contracts or monthly commitments; we pay only for as much or as little content as we deliver through the service.

Security Management

AWS's Elastic Compute Cloud (EC2) provides a feature called security groups, which work like an inbound network firewall: we specify the protocols, ports, and source IP ranges that are allowed to reach our EC2 instances. Each EC2 instance can be assigned one or more security groups, each of which routes the appropriate traffic to the instance. Security groups can be configured using specific subnets or IP addresses, which limits access to EC2 instances.
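As a sketch of this kind of rule with the AWS CLI, the security group ID and CIDR range below are hypothetical placeholders:

```shell
# Allow SSH (TCP port 22) to instances in this security group,
# but only from one trusted IP range
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.0/24
```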

ElastiCache


AWS ElastiCache is a web service that manages an in-memory cache in the cloud. A cache plays a very important role in memory management: it helps reduce the load on services and improves performance and scalability on the database tier by caching frequently used information.

RDS


AWS RDS (Relational Database Service) provides access to a MySQL, Oracle, or Microsoft SQL Server database engine. The same queries, applications, and tools can be used with AWS RDS. It automatically patches the database software and manages backups as per the user's instructions, and it also supports point-in-time recovery. There are no up-front investments required, and we pay only for the resources we use.

RDBMS on EC2


AWS EC2 allows users to install the RDBMS (Relational Database Management System) of their choice, such as MySQL, Oracle, SQL Server, DB2, etc., on an EC2 instance and manage it as required. AWS EC2 uses AWS EBS (Elastic Block Store), which is similar to network-attached storage. All data and logs for databases running on EC2 instances should be placed on AWS EBS volumes, which will remain available even if the database host fails. AWS EBS volumes automatically provide redundancy within the availability zone, which increases their availability over simple disks. Further, if a volume is not sufficient for our database needs, additional volumes can be added to increase the performance of the database.


Using AWS RDS, the service provider manages the storage and we only focus on managing the data.

Storage and Backups


AWS cloud provides various options for storing, accessing, and backing up web application data and assets. The AWS S3 (Simple Storage Service) provides a simple web-services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web.

AWS S3 stores data as objects within resources called buckets. The user can store as many objects as required within a bucket and can read, write, and delete objects from the bucket.
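For illustration, the basic object operations can be sketched with the AWS CLI; the bucket and file names below are hypothetical:

```shell
# Write (upload) an object to the bucket
aws s3 cp backup.tar.gz s3://my-example-bucket/backups/backup.tar.gz

# Read (download) it back
aws s3 cp s3://my-example-bucket/backups/backup.tar.gz restored.tar.gz

# List objects under a prefix, then delete one
aws s3 ls s3://my-example-bucket/backups/
aws s3 rm s3://my-example-bucket/backups/backup.tar.gz
```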

AWS EBS is effective for data that needs to be accessed as block storage and requires persistence beyond the life of the running instance, such as database partitions and application logs.

AWS EBS volumes can be maximized up to 1 TB, and these volumes can be striped for larger volumes and increased performance. Provisioned IOPS volumes are designed to meet the needs of database workloads that are sensitive to storage performance and consistency.

AWS EBS currently supports up to 1,000 IOPS per volume. We can stripe multiple volumes together to deliver thousands of IOPS per instance to an application.

No Physical Network Devices Needed

In AWS, network devices such as firewalls, routers, and load balancers for AWS applications no longer reside on physical devices; they are replaced with software solutions.


Multiple options are available to ensure quality software solutions. For load balancing, choose Zeus, HAProxy, Nginx, Pound, etc. For establishing a VPN connection, choose OpenVPN, Openswan, Vyatta, etc.

No Security Concerns

AWS provides a more secure model, in which every host is locked down. In AWS EC2, security groups are designed for each type of host in the architecture, and a large variety of simple and tiered security models can be created to enable minimum access among hosts within your architecture as per requirement.

Data Centers

EC2 instances are available in most availability zones in each AWS region, providing a model for deploying your application across data centers for both high availability and reliability.
Published May 15, 2018 by

How to Transfer PuTTY Sessions To Another Windows System

PuTTY is a terminal emulator application that can act as a client for the SSH protocol, among others. You can use PuTTY for remote login from your system.

By default, PuTTY stores session information in the registry on Windows machines. If you have several PuTTY sessions stored on one system and would like to transfer those sessions to another system, you need to transfer the HKEY_CURRENT_USER\Software\SimonTatham registry key and values as explained below:

Export the Session:

Click on Start -> Run ->  

And enter the following regedit command in the Run dialog box, which will place the PuTTY registry key and values on your desktop in the putty-registry.reg file.

regedit /e "%userprofile%\desktop\putty-registry.reg" HKEY_CURRENT_USER\Software\SimonTatham


Import the Session:

Transfer putty-registry.reg to the destination Windows machine. Right-click the .reg file and select Merge as shown below. This will display a confirmation message: "Are you sure you want to add the information in putty-registry.reg to the registry?". Click 'Yes' to accept.
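If you prefer the command line on the destination machine, the same merge can be done silently with regedit's /s switch (assuming the file was copied to the desktop):

```shell
regedit /s "%userprofile%\desktop\putty-registry.reg"
```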


Launch PuTTY to verify that the new sessions were transferred successfully. The registry key merge will not delete the previous PuTTY sessions. Instead, it will merge the entries into the existing PuTTY sessions on the destination Windows machine.
Published May 10, 2018 by

How to Upload a File to Google Drive from the Terminal/Command Line

We are doing several Linux projects regularly, and we will need to be sure we are backing them up. I wanted to quickly back up a copy of my files and so I went looking for an easy way to upload a file to Google Drive, and I found it with gdrive.

Here is the tutorial on how to upload a file to Google Drive from the command line or terminal.


1. Go to the root directory and download gdrive.
 # cd ~  
 # wget "https://docs.google.com/uc?id=0B3X9GlR6EmbnWksyTEtCM0VfaFE&export=download"  

Note the quotes around the URL; without them, the shell treats the & as a background operator.

2. You should see a file in your directory named uc?id=0B3X9GlR6EmbnWksyTEtCM0VfaFE&export=download. Rename this file to gdrive.
 # mv "uc?id=0B3X9GlR6EmbnWksyTEtCM0VfaFE&export=download" gdrive  

3. Give execute permission to gdrive.

 # chmod +x gdrive  

4. Install the file to /usr folder.

 # sudo install gdrive /usr/local/bin/gdrive  

5. Now we will need to give access to Google Drive to allow this program to connect to your account. To do this, insert below command.

 # gdrive list  

6. Copy the link it gives you into your browser and choose your Google Drive account.


7. Click Allow button to give access.


8. Copy the generated verification code and paste it into the terminal.


9. Now we are done... Let's upload a file.

 # gdrive upload /file_path  

For example,

 # gdrive upload /opt/subhash.txt  

10. Go to Google Drive and check that the file was uploaded.