USING IAM USERS TO ACCESS ADDS

INTRODUCTION

In this lab, we are going to create IAM users that can access the CloudFormation and EC2 services so they can manage the stack we built after editing the ADDS stack. This gives the user access to only those resources, allowing them to view and manage the stack. Permissions should be configured carefully: granting a user more services than they actually need widens the attack surface and puts data at risk. Accessing AWS services through IAM users is very useful in large organizations, where different teams need access to different services, and this can be handled by creating separate IAM users.

To create a user, go to the IAM service and select Users from the left navigation menu.

ADDS-4

STEP 1: Enter a name for the user and select the type of access this user should have: programmatic access (an Access Key ID and Secret Access Key) and/or AWS Management Console access. We can set a default password and require the user to change it at the next sign-in.

ADDS-4-1

STEP 2: We have to attach a few policies in order to grant access to our user. Be careful which policies you attach, because this is the point where we hand over control of AWS resources. Select "Attach existing policies directly" to use the managed policies pre-created by AWS.

ADDS-4-2

STEP 3: For accessing ADDS we need to attach AWSCloudFormationReadOnlyAccess, AmazonEC2FullAccess, and AmazonVPCFullAccess. These policies grant access only to those resources in AWS.

ADDS-4-3
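For anyone who prefers scripting this over clicking through the console, the same user setup can be sketched with boto3, the AWS SDK for Python (not used in the original lab). The user name and temporary password below are placeholders; the policy ARNs are the AWS managed policies named in this step.

import boto3

iam = boto3.client("iam")

USER = "adds-operator"           # placeholder user name
TEMP_PASSWORD = "ChangeMe#2017"  # placeholder; the user must rotate it at first sign-in

# Create the IAM user and give it console access with a temporary password.
iam.create_user(UserName=USER)
iam.create_login_profile(
    UserName=USER,
    Password=TEMP_PASSWORD,
    PasswordResetRequired=True,  # forces the password change described in STEP 4
)

# Attach the three AWS managed policies used for the ADDS stack.
for policy_arn in [
    "arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess",
    "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
    "arn:aws:iam::aws:policy/AmazonVPCFullAccess",
]:
    iam.attach_user_policy(UserName=USER, PolicyArn=policy_arn)

print("Sign-in URL: https://<account-id-or-alias>.signin.aws.amazon.com/console")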

STEP 4: The IAM dashboard shows a sign-in link that directs us to the login page. Log in with the new user's details, and then set a new password when prompted.

ADDS-4-4

STEP 5: If we try to access any other service, we will get an "Access Denied" error.

ADDS-4-5

STEP 6: If we open the EC2 service, the instances are displayed and can be controlled exactly as an admin would control them.

ADDS-4-6

ADDS-4-7

CONCLUSION

In this lab, we have attached IAM permissions for the ADDS stack that we created in the other region. The user we created gets access only to the resources in our ADDS stack.


ADDS BUDGETING

INTRODUCTION

In this blog, we are going to analyze the costs we can be charged even while using the Free Tier. This gives us an idea of how prices vary from region to region: even when we use the same services, the cost differs between regions.

I have used two regions to do my ADDS Project:

  1. Ohio (US-EAST-2)
  2. Virginia (US-EAST-1)

The costs in these regions vary accordingly: as the following figures show, the same EC2 instance types are priced differently. We also used extra features in Ohio that we did not use in Virginia.

2017-09-06.png

The cost in Virginia is much lower, since we deployed only VPCs, subnets, and route tables there.

2017-09-06 (1).png

At the beginning of this project I had $88 of credit (Free Tier), but since the Free Tier policy had changed, the $100 credit was only valid for three months.

We can see all the charges under Bills, which is available in the My Account settings.

DEVELOPING THE TEMPLATE TO CREATE THE PREVIOUS ADDS SYSTEM.

INTRODUCTION

In this lab, we are going to edit the template that was generated using CloudFormer. The template is in JSON format, which is easy to understand.

WHY ARE WE EDITING?

We are deploying the ADDS infrastructure in a different region because running the template, with minor modifications, in another region lets us address disaster recovery and region-migration scenarios. We cannot deploy the same template into the original region, because it describes the same ADDS infrastructure and all its parameters would clash with the existing resources. We also want to simplify the template so that new elements can be added to the infrastructure easily.

Editing the infrastructure will be fun because it forces us to learn every module in it. Since we are moving all the resources of our ADDS infrastructure to a different region, we need to recreate every single piece. Being an IT expert alone is not enough for this; we need to understand the JSON template thoroughly. For now, we can build everything from our template except the routing functionality. So let's begin; it will take around two hours to edit the code completely.

STEP 1: Open the CloudFormation service in a region other than the one where the ADDS stack lives. Since I stored the template in an S3 bucket, I will load it from there; if you copied it to your local system, upload that file instead. Then click on "View/Edit template in Designer".

For convenience, I have kept the original template on the left and the template we are editing on the right. They target different regions, so first we need to change the Availability Zones wherever necessary. To find things easily, click into the code, press Ctrl+F, type the item you are looking for, and edit it accordingly.

2017-09-02 (36).png

STEP 2: Now edit the resource names. We only need to edit the logical names of our VPC, subnets, route tables, Elastic IPs, and so on, because if we keep them unchanged these resources will not be created properly in the new region. The properties stay the same, but the names should be changed so they are unique and distinct from the originals. Not every name strictly has to change, but naming them uniquely makes clear which region they belong to. A scripted version of this renaming is sketched after the screenshots below.

2017-09-02 (35)

Make sure to change the region wherever necessary, or the template will not work because of region differences.

2017-09-02 (36)

2017-09-02 (37)

2017-09-02 (38)

Change the Internet Gateway name to any unique value.

2017-09-02 (39)

The DHCP options name can be changed if you wish, but it is not necessary.

2017-09-02 (40)

2017-09-02 (41)

2017-09-02 (42)

2017-09-02 (43)

2017-09-02 (44)

2017-09-02 (45)

2017-09-02 (46)
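If clicking through Designer feels tedious, the renaming from STEP 2 can also be scripted in Python. This is only a sketch: it assumes the exported template is saved locally as adds-template.json, and the Availability Zones and logical names in the dictionary are examples that you would swap for the names CloudFormer actually produced in your copy.

import json

# Load the CloudFormer-generated template as plain text so we can do the
# same find-and-replace we would do with Ctrl+F in Designer.
with open("adds-template.json") as f:          # file name is an assumption
    body = f.read()

replacements = {
    # Swap Availability Zones for the target region (example values).
    "us-east-1a": "us-east-2a",
    "us-east-1b": "us-east-2b",
    # Give logical resource names a region-specific suffix so they do not
    # clash with the originals; adjust to your own template's names.
    "vpc1": "vpc1ohio",
    "subnet1": "subnet1ohio",
    "rtb1": "rtb1ohio",
}
for old, new in replacements.items():
    body = body.replace(old, new)

# Confirm the result is still valid JSON before using it.
json.loads(body)
with open("adds-template-ohio.json", "w") as f:
    f.write(body)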

STEP 3: After renaming the VPC, subnets, route tables, and so on, we must change the settings under the Launch Configuration, such as the image ID, instance type, and key name. These are specific to each region.

2017-09-02 (47)

To find the image ID, go to the EC2 service in the region where we are deploying the template, click "Launch Instance", and browse the list of operating systems with their AMIs. Select the operating system we are using and copy its AMI ID.

2017-09-02 (48)
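The same lookup can be scripted with boto3 instead of copying the AMI from the launch wizard. The sketch below assumes we want the newest Amazon-published Windows Server 2012 R2 base AMI in us-east-2; adjust the name filter and region to whatever operating system and region you are actually using.

import boto3

# Query EC2 in the target region for Amazon-owned Windows AMIs and pick the newest.
ec2 = boto3.client("ec2", region_name="us-east-2")
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[
        {"Name": "platform", "Values": ["windows"]},
        {"Name": "name", "Values": ["Windows_Server-2012-R2_RTM-English-64Bit-Base-*"]},
    ],
)["Images"]

latest = max(images, key=lambda img: img["CreationDate"])
print(latest["ImageId"])   # paste this into the Launch Configuration's ImageId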

STEP 4: We also need to create a key pair in the region where we are deploying the template. This can be done under the EC2 service and is straightforward.

2017-09-02 (50)
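Creating the key pair can also be done with a few lines of boto3; the key name, region, and output path below are placeholders.

import boto3

# Create a key pair in the target region and save the private key locally.
ec2 = boto3.client("ec2", region_name="us-east-2")
key = ec2.create_key_pair(KeyName="adds-ohio-key")   # name is a placeholder

with open("adds-ohio-key.pem", "w") as f:
    f.write(key["KeyMaterial"])                      # keep this file safe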

These are the major parameters we need to edit in our template. Make sure you go through the entire template and apply the new names. When we compare the architectures of the templates in these two regions, they will be the same.

2017-09-02 (52).png

STEP 5: The tick icon at the top left validates the template and checks for errors; we should not get any. The cloud icon next to it creates a stack from this template. Name the stack and click "Create Stack". It takes some time, and at first it reports ROLLBACK_IN_PROGRESS and then ROLLBACK_COMPLETE, which means the stack failed to create.
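The validate-and-create sequence that the Designer icons perform can be reproduced with boto3 as well; the stack name, file name, and region are placeholders.

import boto3

cf = boto3.client("cloudformation", region_name="us-east-2")

with open("adds-template-ohio.json") as f:   # the edited template from STEP 2
    template_body = f.read()

# Same check as the tick icon in Designer: raises an error if the template is invalid.
cf.validate_template(TemplateBody=template_body)

# Same action as the cloud icon: create the stack from the template.
cf.create_stack(StackName="ADDS-Ohio", TemplateBody=template_body)

# Poll until the stack finishes; if it rolls back (as described above),
# the waiter raises an error instead of returning.
waiter = cf.get_waiter("stack_create_complete")
waiter.wait(StackName="ADDS-Ohio")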

To make the routing functionality work we would actually need to add a few more resources, such as a NAT Gateway. The routing functionality is what lets us reach the public IP and connect to our ADDS infrastructure. Since we are not yet fluent in JSON templating, we will remove this functionality and create the basic infrastructure, which includes every other resource.

STEP 6: Remove the Route1, Route2, Route3, Route4, and Route5 resources from the template, and fix up the surrounding brackets and commas so the JSON stays valid. If you run the template now, it creates the stack. There is one route related to our Internet Gateway (igw1 in my case); we can keep that even though it does not add any functionality yet.
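Deleting the route resources by hand is error-prone because of the trailing commas. A small script keeps the JSON valid automatically; the logical IDs route1 through route5 are assumptions, so check how CloudFormer named them in your copy of the template.

import json

# Load the edited template and drop the route resources we cannot support yet.
with open("adds-template-ohio.json") as f:
    template = json.load(f)

# Logical IDs are placeholders; match them to your template.
for route_id in ["route1", "route2", "route3", "route4", "route5"]:
    template["Resources"].pop(route_id, None)

# Writing the template back out via json.dump keeps brackets and commas valid.
with open("adds-template-ohio.json", "w") as f:
    json.dump(template, f, indent=2)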

2017-09-02 (53)

We can see that a new EC2 instance is created in the region where we deployed this template.

2017-09-02 (54)

If we compare the architectures, the left one is our original architecture and the right one is our edited architecture.

2017-09-02 (55)

This is the new architecture we created using our CloudFormer template.

template1-designer (5)

As we can see, the VPC is created and a new instance appears in our instance list, but it does not include the Active Directory components, such as the domain we created for the earlier ADDS stack. This is because we would need to create another stack (ADStack), which contains the DHCP options and the Microsoft Active Directory components. That is a default template we can launch in any region, defining parameters such as the domain, IP addresses, and so on.

CONCLUSION

In this lab, we learned how to edit the JSON template and use the edited version in a new region to create the ADDS infrastructure in a simplified form. We also compared it with the old infrastructure and noted the changes we made. Finally, we can say that replicating the Active Directory components from one region to another is not possible with CloudFormer templates alone; we simply need to deploy a new ADDS stack in the other region if an organization requires ADDS in two regions.

BUDGETS

We are just transferring the previous infrastructure to a different region, so the migration itself has no cost. We will be deploying a new EC2 instance in that region, which will be billed at that region's rates. Let's look at the pricing of EC2 instances in both regions.

BUDGET FOR CLOUDFORMER INSTANCE

BUDGET FOR CLOUDFORMER INSTANCE-2

THANK YOU.

RUNNING CLOUDFORMER TO CREATE A TEMPLATE FROM OUR EXISTING ADDS INFRASTRUCTURE

INTRODUCTION

In this lab, we are going to create a template using CloudFormer. CloudFormer is an AWS tool that generates a CloudFormation template from our existing infrastructure. In this case, we are going to create a template from our ADDS infrastructure so that we can deploy it in a different region. This helps us build redundant infrastructure, which is useful for large companies.

STEP 1: Create a CloudFormer stack in the same region as our ADDS infrastructure. Name it accordingly, and set a username and password for accessing CloudFormer. We can either use the default VPC or create a new one; it is better to create a new VPC, because relying on the default VPC becomes messy in large organizations.

2017-09-02 (16)

2017-09-02 (17)

2017-09-02 (18)

We need to acknowledge that the stack will create IAM resources, because the username and password we are using have to be stored in IAM so that CloudFormation can use them.

2017-09-02 (19)

STEP 2: After the stack is created, a new t2.small instance appears in our instance list, and it will incur charges. To access CloudFormer, click the link, which is the public DNS of the t2.small instance. This takes us to the CloudFormer interface.

Whenever you stop the instance and start it again, go back to the instance and copy the new public DNS to access CloudFormer.

2017-09-02 (20).png

We need to provide the username and password which we created for CloudFormer.

2017-09-02 (21).png

STEP 3: We need to select a few parameters to create the new template. Since we are building it for ADDS, select everything related to our ADDS stack. First, select the VPC, which is selected by default; you can also find all the details in your stack in CloudFormation. Before this step, we can assign a DNS name to the new template and add a description to make it easy to identify.

2017-09-02 (22)

Now select all four subnets, the public and private subnets pre-created during the ADDS deployment, and the Internet Gateways; in our case there is only one.

2017-09-02 (23)

This is where we can find all the details of our ADDS stack; we need to build the template from these details to reproduce the ADDS infrastructure.

2017-09-02 (24)

Similarly, make the following selections in your CloudFormer template.

2017-09-02 (25)

2017-09-02 (26)

2017-09-02 (27)

2017-09-02 (28)

2017-09-02 (30)

STEP 4: Here we can modify the logical names of our VPCs, subnets, Internet gateways, and so on. Click "Modify" in the respective section.

2017-09-02 (31)

STEP 5: We then get the template code, which is around 1,000 lines. Make sure to copy it or save it to an Amazon S3 bucket; this template will be used to launch new stacks in other regions.

2017-09-02 (32)
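Saving the generated template to S3 can be scripted with boto3 as well; the local file name, bucket, and key below are placeholders for your own values.

import boto3

s3 = boto3.client("s3")

# Upload the ~1000-line template that CloudFormer produced so we can reuse it
# from any region later. Bucket and key are placeholders.
s3.upload_file(
    Filename="cloudformer-output.json",
    Bucket="my-adds-templates",
    Key="adds/cloudformer-output.json",
)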

CONCLUSION

In this lab, we learned how to create a CloudFormer stack using the CloudFormation service. We also created a new template for our ADDS infrastructure using CloudFormer.

BUDGET

For this lab, we are using the CloudFormation service to create the CloudFormer stack, which is free under the Free Tier. However, when we create the CloudFormer stack, an instance is launched to serve it. A t2.small instance is created by default, which is billed according to the region.

BUDGET FOR CLOUDFORMER INSTANCE.PNG

THANK YOU.

MIGRATING THE DATABASE FROM MySQL TO AURORA

INTRODUCTION

In this lab, we are going to create a database on the Aurora platform and transfer the existing data from MySQL into it. This kind of database migration helps create replicas of the same data on different platforms. We are using the Database Migration Service (DMS) and the Relational Database Service (RDS) provided by AWS. Aurora is essentially the next stage in the evolution of Amazon's hosted database services: if EC2 is what we use for servers and RDS was introduced for managed database instances, Aurora is the next step in that offering. It is fully compatible with any operation you would run through MySQL, so no code changes are needed.

To set up the migration, we first need to create an Aurora RDS instance, which is done through the RDS service.

STEP 1: Click "Launch Aurora (MySQL)" in the RDS service; this opens the default settings configuration page.

Lab 11 (19)

STEP 2: Specify all the details and set the credentials for accessing this RDS instance. To connect to the instance we will use MySQL Workbench on our local system.

Lab 11 (1)

STEP 3: Here we define the VPC to connect to. Make sure all the settings match our MySQL instance so the connection can be made; both instances should be in the same VPC.

Lab 11 (2)

STEP 4: Turn off monitoring and maintenance, since they are not needed for this lab, and click "Launch DB Instance".

Lab 11 (3)
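The console wizard in steps 1 through 4 maps onto two boto3 calls: one for the Aurora cluster and one for the instance inside it. This is only a sketch; the identifiers, credentials, and instance class below are placeholders you would replace with your own values.

import boto3

rds = boto3.client("rds")

# Create the Aurora (MySQL-compatible) cluster that will receive the migrated data.
rds.create_db_cluster(
    DBClusterIdentifier="dinostore-aurora",   # placeholder identifier
    Engine="aurora",
    MasterUsername="admin",                   # placeholder credentials
    MasterUserPassword="ChangeMe#2017",
)

# Create a DB instance inside the cluster; it must live in the same VPC as
# the source MySQL instance for the migration to work.
rds.create_db_instance(
    DBInstanceIdentifier="dinostore-aurora-1",
    DBClusterIdentifier="dinostore-aurora",
    DBInstanceClass="db.t2.small",            # placeholder size
    Engine="aurora",
)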

STEP 5: In MySQL Workbench there is a plus (+) button on the MySQL Connections tab, which is where we create new connections. Now we need to create a connection to our Aurora RDS instance. The connection details are as follows:

Name: Aurora Database

Hostname: the endpoint of our Aurora RDS instance

Username and Password: the credentials we set when creating the Aurora instance.

After that, test the connection; a new connection for our Aurora database then appears in MySQL Workbench.

Lab 11 (4)
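If you want to verify the connection without Workbench, a quick check from Python using the PyMySQL package works the same way. This is an alternative to the documented Workbench test, and the endpoint and credentials are placeholders.

import pymysql  # pip install pymysql

# Connect to the Aurora cluster endpoint with the credentials set at creation time.
conn = pymysql.connect(
    host="dinostore-aurora.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="admin",
    password="ChangeMe#2017",
    port=3306,
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())   # prints the MySQL-compatible server version
conn.close()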

STEP 6: Create the default schema in our Aurora database so the replicated data has somewhere to land. This can be done by running the database script we used in the DinoStore labs.

Lab 11 (5)

STEP 7: If we explore the schemas, all the tables exist but they are empty; we now need to replicate the existing database into them. To do the migration, go to the Database Migration Service (DMS) in AWS and create a new replication instance.

Lab 11 (9)

STEP 8: First, fill in the replication instance details; the instance should be publicly accessible.

Lab 11 (10)

STEP 9: Now define the source and target endpoints for the RDS instances we are using in the Database Migration Service. All the details can be found in the RDS service where our databases live.

Lab 11 (11)

STEP 10: We need to change the inbound rules on our Aurora instance's security group in order to reach the instance from MySQL Workbench. THE RDS RULE SOURCE SHOULD BE SET TO ANYWHERE. This is not recommended in a real-life scenario, but for our labs it is easier than using "My IP", which has to be reselected every time we change networks.

Lab 11 (12)

Lab 11 (13)

STEP 11: After filling in all the details, click "Run test". Verifying the connection takes 2-3 minutes, and we should see "Connection tested successfully".

Lab 11 (14)
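Steps 8, 9, and 11 can also be expressed with boto3 against the DMS API. This is a sketch only: every identifier, hostname, and password below is a placeholder for your own values, and the replication instance must reach the "available" state before the connection tests will pass.

import boto3

dms = boto3.client("dms")

# STEP 8: a small, publicly accessible replication instance.
repl = dms.create_replication_instance(
    ReplicationInstanceIdentifier="dinostore-repl",
    ReplicationInstanceClass="dms.t2.micro",
    PubliclyAccessible=True,
)["ReplicationInstance"]

# STEP 9: source (MySQL on RDS) and target (Aurora) endpoints.
source = dms.create_endpoint(
    EndpointIdentifier="dinostore-mysql-source",
    EndpointType="source",
    EngineName="mysql",
    ServerName="dinostore-mysql.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder
    Port=3306,
    Username="admin",
    Password="ChangeMe#2017",
)["Endpoint"]
target = dms.create_endpoint(
    EndpointIdentifier="dinostore-aurora-target",
    EndpointType="target",
    EngineName="aurora",
    ServerName="dinostore-aurora.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder
    Port=3306,
    Username="admin",
    Password="ChangeMe#2017",
)["Endpoint"]

# STEP 11: the equivalent of clicking "Run test" in the console.
for endpoint in (source, target):
    dms.test_connection(
        ReplicationInstanceArn=repl["ReplicationInstanceArn"],
        EndpointArn=endpoint["EndpointArn"],
    )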

STEP 12: After adding the endpoints, create a task for migrating the database. In the task settings, choose to migrate existing data from MySQL to Aurora. We could also enable ongoing replication, which would be even more useful, but since the database for our project is already complete we will simply migrate the existing data.

Lab11

STEP 13: In the task settings, we need to choose how the target tables are prepared, so we select "Drop tables on target".

Lab 11 (16)

STEP 14: The LOB settings apply when we have large objects to replicate from MySQL to Aurora and need a few extra parameters. Since we are not doing ongoing replication, keep everything at the defaults. We also need to select which part of the database should be copied under "Schema" and what to do with that schema name.

Lab 11 (17)

STEP 15: After that, the task is created and its status changes to "Ready". Click "Start/Resume" to start the migration.

Lab 11 (18)
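Steps 12 through 15 correspond to creating and starting a full-load task. The sketch below uses placeholder ARNs (copy the real ones from the resources created in the previous sketch or from the DMS console), migrates every table in the dinostoredb schema, and drops target tables first, mirroring the settings chosen above.

import json
import boto3

dms = boto3.client("dms")

# Placeholder ARNs; replace with the ARNs of your own DMS resources.
SOURCE_ENDPOINT_ARN = "arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE-PLACEHOLDER"
TARGET_ENDPOINT_ARN = "arn:aws:dms:us-east-1:123456789012:endpoint:TARGET-PLACEHOLDER"
REPLICATION_INSTANCE_ARN = "arn:aws:dms:us-east-1:123456789012:rep:INSTANCE-PLACEHOLDER"

# Copy every table in the dinostoredb schema (adjust the schema name if yours differs).
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-dinostoredb",
        "object-locator": {"schema-name": "dinostoredb", "table-name": "%"},
        "rule-action": "include",
    }]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="dinostore-full-load",
    SourceEndpointArn=SOURCE_ENDPOINT_ARN,
    TargetEndpointArn=TARGET_ENDPOINT_ARN,
    ReplicationInstanceArn=REPLICATION_INSTANCE_ARN,
    MigrationType="full-load",                        # migrate existing data only
    TableMappings=json.dumps(table_mappings),
    ReplicationTaskSettings=json.dumps(
        {"TargetMetadata": {"TargetTablePrepMode": "DROP_AND_CREATE"}}
    ),
)["ReplicationTask"]

# Equivalent of clicking "Start/Resume" once the task shows as Ready.
dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)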

STEP 16: The status now changes to "Running". Make sure both RDS instances are running and properly connected.

Lab 11 (21)

STEP 17: We can see the source MySQL database, which has a few order items, orders, and the product list.


STEP 18: As time progresses, all the data is copied from MySQL to our Aurora database.

Lab 11 (22)

Lab 11 (23)

In the AWS Database Migration Service (DMS), the task we created shows as 100% complete, and below it we can see the number of tables copied, listed for each table in our dinostoredb schema.

Lab 11 (24)

CONCLUSION

In this lab, we copied all the data from the MySQL database to the Aurora database. This lets us migrate everything without any changes to our MySQL code. We chose Aurora because it offers:

  1. Faster recovery from instance failure (5x or more vs. MySQL)
  2. Consistently lower impact on the primary replica
  3. Higher throughput (theoretically 5x for the same resources vs. MySQL), achieved by decoupling the cache and storage subsystems, spreading them across many nodes, and committing the log first while database manipulation happens asynchronously
  4. Compatibility with MySQL 5.6, which existing databases can migrate to

BUDGETS

In this lab, we used the Database Migration Service (DMS), which is billed even under the Free Tier, so make sure to set a budget for this service and monitor it regularly if you are learning. We are also charged for hosting the database on the Aurora platform.

We can find the pricing of DMS instances and storage at this link: https://aws.amazon.com/dms/pricing/

The figures below show the costs AWS charged me for using the DMS service.

2017-09-06 (2).png

2017-09-06 (3).png

MANUALLY DEPLOYING A MICROSOFT ACTIVE DIRECTORY SERVICE USING AWS DIRECTORY SERVICE.

INTRODUCTION

In this lab, we are going to create a stack using the Directory Service provided by AWS. We will use the CloudFormation service to create the Active Directory infrastructure. AWS provides three scenarios:

Scenario 1: Deploy and manage your own AD DS installation on the AWS Cloud.
The AWS CloudFormation template for this scenario builds the AWS Cloud infrastructure, and sets up and configures AD DS and AD-integrated DNS on the AWS Cloud. It doesn't include AWS Directory Service, so we need to handle all AD DS maintenance and monitoring tasks ourselves.

Scenario 2:  Extend your on-premises AD DS to the AWS Cloud.

The AWS CloudFormation template for this scenario builds the base AWS Cloud infrastructure for AD DS, and you perform several manual steps to extend your existing network to AWS and to promote your domain controllers. We need to handle all AD DS maintenance and monitoring tasks ourselves.

Scenario 3: Deploy AD DS with AWS Directory Service on the AWS Cloud.
The AWS CloudFormation template for this scenario builds the base AWS Cloud infrastructure and deploys AWS Directory Service for Microsoft AD, which offers managed AD DS functionality on the AWS Cloud. AWS Directory Service takes care of AD DS tasks such as building a highly available directory topology, monitoring domain controllers, and configuring backups and snapshots.

STEP 1: Since we are deploying with the Directory Service, go to the AWS architecture (Quick Start) page to see all the templates provided by AWS. There, select Active Directory DS under Microsoft workloads.

ADDS (24)

STEP 2: Here we find the architecture AWS will provision and the services included when deploying it. The default infrastructure is Scenario 1.

ADDS (25)

STEP 3: For the other scenarios, scroll down to the deployment details and click "Launch the Quick Start".

ADDS (23)

STEP 4: Here we can find detailed information about each scenario. Click "Launch Quick Start" under Scenario 3. We can use a pre-created VPC or create a new one.

ADDS (22)

STEP 5: This is the template provided by AWS; in total, four stacks are created: VPCStack, ADStack, RDGWStack, and the main stack.

VPCStack: contains all the information about the newly created VPC, along with details of our subnets, internet gateways, VPC ID, and so on.

ADStack: holds all the details of our domain, the admin login, and so on.

RDGWStack: holds the details of the Remote Desktop Gateway instance created for our ADDS. This is the only instance we can use to access the Active Directory infrastructure.

ADDS (26)

STEP 6: Configure all the settings before deploying this infrastructure. Here we set the domain, the VPC IP range, the subnet IP ranges, the subnets, and a username and password. The username and password are needed to access the Active Directory instance. Select the instance type we require and a key pair for accessing that instance; we can use an existing key pair or create a new one.

2017-09-02 (1)

2017-09-02

2017-09-02 (2)

STEP 7: On the next page we can add tags or IAM roles, then confirm the details and click "Create". It takes around 30 minutes to set everything up, and we should end with four stacks whose status is CREATE_COMPLETE.

2017-09-02 (3).png

STEP 8: Now we will access the instance to check that everything works. Go to Instances, select the RDGW instance, and click "Connect". Download the Remote Desktop file; to access the Active Directory services, enter the username and password we set for Active Directory. To find the username, open ADStack, look at its Outputs, and read the DomainAdmin value. You should already have the password.

2017-09-02 (4).png

BEFORE OPENING THE REMOTE DESKTOP FILE, WE NEED TO CHANGE OUR SECURITY GROUP RULES TO ALLOW RDP FROM ANYWHERE; OTHERWISE WE CANNOT ACCESS THE INSTANCE OVER RDP. A scripted version of this rule change is sketched after the screenshots below.

2017-09-02 (5)

2017-09-02 (6)

2017-09-02 (7)
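The same rule change can be applied with boto3; the security group ID is a placeholder, and opening RDP to 0.0.0.0/0 is only acceptable for a short-lived lab.

import boto3

ec2 = boto3.client("ec2")

# Allow RDP (TCP 3389) from anywhere so the Remote Desktop file can connect.
# The group ID is a placeholder; outside a lab, restrict the CIDR to your own IP.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3389,
        "ToPort": 3389,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "RDP for ADDS lab"}],
    }],
)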

STEP 9: Now we can access our instance with the domain admin username and password.

2017-09-02 (8).png

STEP 10: On the instance, open Server Manager; a few roles are already installed. The domain already exists, so there is no need to install Active Directory Domain Services, DHCP, DNS, and so on. We just need tools to manage them, so install "Remote Server Administration Tools" and, under it, "AD DS and AD LDS Tools".

2017-09-02 (9)

2017-09-02 (10)

After installing these, we can access the Active Directory services. Play around by creating some users and groups.

2017-09-02 (11)

2017-09-02 (12)

In Active Directory Sites and Services, we can see three subnets and two servers, created according to our architecture.

2017-09-02 (13)

So this is how we create Active Directory using the AWS Directory Service.

CONCLUSION

In this lab, we learned how to create the Active Directory infrastructure and then worked with it by creating users and groups. To keep costs down we need to switch the instance off, but that cannot be done directly from the Instances view, because the Auto Scaling group would simply replace it; instead, go to "Auto Scaling Groups", select the ADDS Auto Scaling group, and edit it.
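A quick way to do this edit from code is to scale the group down to zero (and back up to one when needed). The sketch below uses boto3, and the group name is a placeholder; copy the real name from the Auto Scaling Groups console.

import boto3

autoscaling = boto3.client("autoscaling")

# Scale the ADDS Auto Scaling group to zero so its instance is terminated and
# not replaced; set the values back to 1 when you want to work again.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="ADDS-RDGW-AutoScalingGroup",  # placeholder name
    MinSize=0,
    MaxSize=0,
    DesiredCapacity=0,
)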


BUDGET

In this lab, we use many services, but the EC2 service will charge us because we are running a t2.large instance. We can run this infrastructure on other instance types; the costs of a few of them are shown below. Creating a template in CloudFormation is free under the Free Tier, but be careful with the services the template launches if you want to preserve your credits. Always stop or terminate your instances after using them.

BUDGET FOR CLOUDFORMER INSTANCE

THANK YOU.

TROUBLESHOOTING PROBLEMS IN ALL LABS

ERROR TYPE 1: You get an error about a namespace, for example "The type or namespace name 'Amazon' could not be found". These errors are not limited to the Amazon namespace; any namespace can be affected.

SOLUTION: Troubleshooting these problems is simple. Check the steps below and then run the application again.

STEP 1: Make sure the required NuGet packages are installed in your project. If you add a new project, you have to install the NuGet packages again for that particular project.

STEP 2: Make sure all the NuGet packages are up to date.

If these steps check out, there should be no more namespace errors.

In my blog posts I have listed the packages required for each lab; check them and install them.

ERROR TYPE 2: ‘ReceiveMessageRequest’ does not contain a definition for ‘AttributeName’ and no extension method ‘AttributeName’ accepting a first argument of type ‘ReceiveMessageRequest’ could be found (are you missing a using directive or an assembly reference?) OR ‘ReceiveMessageResult’ does not contain a definition for ‘Message’ and no extension method ‘Message’ accepting a first argument of type ‘ReceiveMessageResult’ could be found (are you missing a using directive or an assembly reference?)

SOLUTION: These are simple errors caused by changes in the SDK or by using a newer version of Visual Studio. Just add an "s" after "Message" and "AttributeName" ("Messages" and "AttributeNames"); they appear in only one or two lines of the code, so find them and edit them.

Example: change response.ReceiveMessageResult.Message.Count to response.ReceiveMessageResult.Messages.Count.

ERROR TYPE 3: MySQL Workbench will not connect if your work location changes, for example when you move from home to college to work on the project.

SOLUTION: Go to your RDS instance in the AWS Management Console, open the inbound rules at the bottom, select the "RDP" rule, and under Source choose "My IP", which updates the allowed IP address. This is needed because your network, and therefore your IP, has changed.

ERROR TYPE 4: You face errors with the QueueServer in Lab 5, where the application you published does not work properly.

SOLUTION: The problem is at step 70. Make sure you have installed the IIS server, including ASP.NET 4.5 (with the developer components), HTTP connectors, and Windows authentication. Otherwise, the publish action cannot execute properly.

THANK YOU.