DATA NETWORK FUNDAMENTALS

DATA STORAGE AND DATA NETWORKING

There are many differences between data storage and data networking; these are shown in the comparison figure below.


COMPARISON OF STORAGE AND NETWORKING DATA

 

DIRECT-ATTACHED STORAGE (DAS)

This is the standard architecture for storage, in which the application, file system, and storage are connected in sequence. It can be used by only a single user because the storage is attached directly to the PC or server.

 


DIRECT-ATTACHED STORAGE ARCHITECTURE

NETWORK-ATTACHED STORAGE (NAS)

In this model, the storage device is attached directly to the network, and its files and storage are shared across the network.


NETWORK-ATTACHED STORAGE ARCHITECTURE

STORAGE AREA NETWORK (SAN)

The storage appears to the servers on the network as another hard drive that is shared among the systems.


STORAGE AREA NETWORK ARCHITECTURE

NAS VS SAN: –

  1. NAS handles the file system and storage in one device; SAN maintains its own network for storage.
  2. NAS is simpler to manage.
  3. In NAS, storage appears to other computers as a file server. SAN storage appears as a disk drive.
  4. SAN is connected to LAN by a fabric layer.

NAS AND SAN: ACCESS: –

The files in NAS and SAN are accessed differently. A NAS system uses file access: the application hands files to the NAS system, which makes it simple to manage.

A SAN system uses block access: the file system is managed outside the SAN system. This makes it easy to configure and gives higher performance.
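The difference between the two access models can be made concrete with a short Python sketch. This is a minimal illustration only; the share path and device name used here (/mnt/nas_share, /dev/sdb) are hypothetical examples, not part of the course material.

    import os

    # File access (NAS style): the client asks for a named file; the NAS
    # system's own file system finds the blocks that hold it.
    def read_via_file(path):
        with open(path, "rb") as f:
            return f.read()

    # Block access (SAN style): the storage looks like a local disk; the
    # client's own file system or database addresses raw blocks itself.
    def read_via_block(device, block_number, block_size=4096):
        fd = os.open(device, os.O_RDONLY)
        try:
            os.lseek(fd, block_number * block_size, os.SEEK_SET)
            return os.read(fd, block_size)
        finally:
            os.close(fd)

    # Hypothetical usage (paths are examples only):
    # data = read_via_file("/mnt/nas_share/report.txt")
    # block = read_via_block("/dev/sdb", block_number=10)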

CONNECTIONS AND ATTACHMENTS

In NAS, all the clients are connected to the LAN and the NAS system is attached to the same network; both the file system and the storage live in the NAS system.


NETWORK CONNECTION FOR NAS

In SAN, clients are connected to the LAN and the SAN network is attached to the LAN through a SAN server. A SAN network can contain multiple storage systems.


NETWORK CONNECTION FOR SAN

INTERNET SMALL COMPUTER SYSTEM INTERFACE (iSCSI)

This is a protocol used for SAN networks; it is an IP-based implementation of the SCSI standard. It is a client-server protocol consisting of an initiator and a target. The protocol can carry signals over long distances, and it is less expensive because it uses standard Ethernet cables and switches.
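As a purely conceptual sketch of the initiator/target roles described above, the toy Python classes below model an initiator issuing a read command to a target. The class names and the in-memory "connection" are invented for illustration; this is not the real iSCSI PDU format or any real library API.

    # Toy model of the iSCSI roles: the target owns the storage, the
    # initiator sends read commands to it (in reality over TCP/IP).
    class Target:
        """Stands in for an iSCSI target: a storage device exposing blocks."""
        def __init__(self, blocks):
            self.blocks = blocks                  # dict: block number -> bytes

        def handle_read(self, block_number):
            return self.blocks.get(block_number, b"\x00" * 4096)

    class Initiator:
        """Stands in for an iSCSI initiator: the client issuing commands."""
        def __init__(self, target):
            self.target = target                  # in reality, a network session

        def read(self, block_number):
            return self.target.handle_read(block_number)

    target = Target({0: b"boot sector", 1: b"application data"})
    initiator = Initiator(target)
    print(initiator.read(1))                      # -> b'application data'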

SERIAL ATTACHED SCSI (SAS)

SAS stands for serial-attached SCSI. This protocol uses serial cables and carries SCSI packets. The equipment is inexpensive and more reliable than iSCSI, but it has a limited range and supports a limited number of connections.


SERIAL CABLES FOR SAS

FIBRE CHANNEL (FC)

It supports several topologies:

  1. FC-P2P (POINT-TO-POINT): – a Fibre Channel link connects the host server directly to the client device.
  2. FC-AL (ARBITRATED LOOP): – all devices are connected in a loop, which supports more connections than P2P; the failure of one device can break the loop.
  3. FC-SW (SWITCHED FABRIC): – all devices, including hosts and clients, are connected to Fibre Channel switches (a fabric).
  4. ZONING: – a zone is a group of Fibre Channel ports; ports in a zone can communicate only with other ports in that zone. There are two types of zoning, hard and soft. In hard zoning, zone members are defined by the Fibre Channel switch, physical hardware, and ports, which also provides good security. In soft zoning, a cable can be moved from one port to another without reconfiguring physical hardware. A minimal sketch of the zone membership check follows this list.
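To make the zoning rule concrete, here is a minimal Python sketch that treats each zone as a set of port names and checks whether two ports may communicate. The zone and port names are hypothetical.

    # Each zone is a set of Fibre Channel port names; two ports may
    # communicate only if they appear together in at least one zone.
    zones = {
        "zone_db":   {"host_a_port1", "array1_port0"},
        "zone_mail": {"host_b_port1", "array1_port1"},
    }

    def can_communicate(port_x, port_y):
        return any(port_x in members and port_y in members
                   for members in zones.values())

    print(can_communicate("host_a_port1", "array1_port0"))   # True: same zone
    print(can_communicate("host_a_port1", "array1_port1"))   # False: different zones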

ADVANTAGES OF FC: –

  1. Very reliable
  2. Scalable
  3. Flexible

FC VS iSCSI VS SAS

These protocols differ based on how they are used in small- and medium-scale enterprises.


COMPARISON BETWEEN iSCSI, SAS, FC

FILE STORAGE VS BLOCK STORAGE

Block storage treats data as standardized chunks (blocks) and provides greater flexibility and higher performance.

File storage handles data in terms of files and is easier to deploy.

HIGH AVAILABILITY AND PERFORMANCE

To design storage solutions, we must consider high-availability and performance factors:

  1. All data should be available at all times
  2. Redundancy
  3. Exceptionally large system.

COURSE COMPLETION CERTIFICATION BY NetApp

Data Network Fundamentals

CERTIFICATION BY NetApp

VIRTUALIZATION FUNDAMENTALS

WHAT IS VIRTUALIZATION?

To achieve higher utilization of physical resources, virtualization breaks the bond between the physical layer and the application layer.

SERVER VIRTUALIZATION

Before virtualization, there are many servers, each with its own storage, RAM, CPU, operating system, and application. Each of these servers needs to be configured and maintained individually.


SERVER VIRTUALIZATION

 

With virtualization, those servers are consolidated onto a single physical server operated by a virtualization platform such as VMware ESX. Utilization increases to around 60%.
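A rough sketch of why consolidation raises utilization: if each physical server is lightly loaded, packing the same workloads onto fewer hosts pushes the average much higher. The per-server figures in the Python example below are illustrative assumptions, not numbers from the course.

    # Illustrative arithmetic: consolidating lightly loaded servers onto fewer hosts.
    server_utilizations = [0.15, 0.20, 0.25, 0.20, 0.20, 0.20]   # assumed per-server load
    total_demand = sum(server_utilizations)                      # 1.2 "servers' worth" of work

    hosts_after = 2                                              # consolidated onto 2 hosts
    utilization_after = total_demand / hosts_after

    print(f"Average utilization before: {total_demand / len(server_utilizations):.0%}")   # 20%
    print(f"Average utilization after:  {utilization_after:.0%}")                         # 60%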

BENEFITS: – Server virtualization offers a few key benefits:

  1. Lower server spend due to increased server utilization
  2. Decreased operating costs related to power, cooling, and real estate.
  3. Lower management costs
  4. Increased flexibility
  5. Higher availability

CHALLENGES: – There are a few challenges with server virtualization, even though it is cost-efficient and fast:

  1. Hardware outages are more serious
  2. Backup of a virtualized environment can be impossible with traditional methods.
  3. Storage failure can wipe out numerous applications

STORAGE VIRTUALIZATION

Server virtualization by itself does not reduce storage costs. Storage virtualization is needed because of rising costs and growth in storage space. When customers used VMware with networked storage, costs were reduced by 55% compared with a typical DAS server environment.


STORAGE COSTS COMPARISON

 

NETAPP STORAGE VIRTUALIZATION

NetApp is a storage virtualization vendor and provides storage virtualization with many benefits, such as: –

  1. It does for storage what VMware does for servers
  2. Pooled storage resources
  3. Multiprotocol
  4. Higher utilization
  5. Lower costs

STORAGE PROBLEMS AND SOLUTIONS

PROBLEM: – storage reliability is the main problem for storage virtualization. A storage failure can cause downtime for many users.

SOLUTION: – NetApp storage virtualization provides more reliable storage by using RAID-DP, which offers data protection superior to RAID-5 and is significantly less expensive than RAID-10.
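The cost comparison comes down to how many disks in a group hold parity or mirror data instead of user data. The quick Python sketch below computes usable capacity for each scheme; the 16-disk group and 2 TB disk size are assumed purely for illustration.

    # Usable capacity per RAID group, assuming equal-size disks.
    # RAID-5 uses 1 parity disk, RAID-DP (double parity) uses 2,
    # RAID-10 mirrors everything, so only half the disks hold unique data.
    def usable_capacity(total_disks, disk_tb, scheme):
        if scheme == "RAID-5":
            data_disks = total_disks - 1
        elif scheme == "RAID-DP":
            data_disks = total_disks - 2
        elif scheme == "RAID-10":
            data_disks = total_disks // 2
        else:
            raise ValueError("unknown scheme")
        return data_disks * disk_tb

    for scheme in ("RAID-5", "RAID-DP", "RAID-10"):
        print(scheme, usable_capacity(16, 2, scheme), "TB usable out of 32 TB raw")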

PROBLEM: – backing up a virtualized environment can be impossible within a traditional backup window.

SOLUTION: – NetApp provides Snapshot copies, which copy only the changed data, giving much faster and more flexible backups. This decreases the time needed for backup, and many Snapshot copies can be kept for more flexibility. They provide instant restoration with no performance degradation and require less storage space.
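The idea behind snapshot copies, keeping references to blocks as they were and consuming new space only for blocks written afterwards, can be sketched in a few lines of Python. This is a simplified conceptual model, not NetApp's actual implementation.

    # Conceptual snapshot: the snapshot is a frozen view of the block map;
    # only blocks written after the snapshot consume additional space.
    class Volume:
        def __init__(self, blocks):
            self.blocks = dict(blocks)            # block number -> data
            self.snapshots = []

        def take_snapshot(self):
            self.snapshots.append(dict(self.blocks))

        def write(self, block_number, data):
            self.blocks[block_number] = data      # old data stays referenced by snapshots

    vol = Volume({0: "os image", 1: "config v1"})
    vol.take_snapshot()
    vol.write(1, "config v2")

    print(vol.blocks[1])           # current data: 'config v2'
    print(vol.snapshots[0][1])     # instant restore point: 'config v1'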

ADDITIONAL BENEFITS

Utilization also improves with NetApp storage virtualization. Without it, applications and virtual machines are allocated space on specific drives, so depending on the company some drives have space left over while others need more.


NETAPP STORAGE VIRTUALIZATION ARCHITECTURE

 

If three disks are allocated for storage, disk 1, disk 2, and disk 3, then without storage virtualization some applications will run short of space on their assigned disk while others leave space unused.

When we use NetApp storage, all the disks become a pooled resource, so any application that requires storage draws from the pool. This gives higher utilization and consumes less power.


BEFORE AND AFTER NETAPP STORAGE VIRTUALIZATION
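As a back-of-the-envelope illustration of the pooling effect described above, the Python sketch below compares three dedicated 1 TB disks with the same disks combined into one pool. The disk sizes and application demands are hypothetical assumptions.

    # Three dedicated 1 TB disks versus the same disks pooled into 3 TB.
    disk_size_tb = 1.0
    demands_tb = {"app1": 1.3, "app2": 0.4, "app3": 0.5}    # assumed demands

    out_of_space = [app for app, d in demands_tb.items() if d > disk_size_tb]
    used_dedicated = sum(min(d, disk_size_tb) for d in demands_tb.values())
    print("Dedicated disks, apps out of space:", out_of_space)                 # ['app1']
    print(f"Dedicated utilization: {used_dedicated / (3 * disk_size_tb):.0%}") # 63%

    pool_tb = 3 * disk_size_tb
    total_demand = sum(demands_tb.values())
    print("Pooled, everything fits:", total_demand <= pool_tb)                 # True
    print(f"Pooled utilization: {total_demand / pool_tb:.0%}")                 # 73%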

 

STORAGE VIRTUALIZATION WITH A-SIS DEDUPLICATION


A-SIS DEDUPLICATION ARCHITECTURE

 

In a virtual environment, every virtual machine has its own operating system, RAM, and applications. With A-SIS deduplication, data that the virtual machines have in common, such as shared operating system and application files, is stored only once. This reduces the storage space needed.
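Block-level deduplication can be illustrated with a short Python sketch that hashes each block and stores identical blocks only once. This is a generic illustration of the technique, not the A-SIS implementation itself.

    import hashlib

    # Store each unique block once, keyed by its hash; each VM keeps only references.
    block_store = {}                                  # hash -> block bytes

    def store_blocks(blocks):
        refs = []
        for block in blocks:
            digest = hashlib.sha256(block).hexdigest()
            block_store.setdefault(digest, block)     # identical blocks share one copy
            refs.append(digest)
        return refs

    # Two VMs whose virtual disks share a common OS block but differ in app data.
    vm1 = store_blocks([b"common OS block", b"vm1 application data"])
    vm2 = store_blocks([b"common OS block", b"vm2 application data"])

    print("Logical blocks stored:", len(vm1) + len(vm2))   # 4
    print("Physical blocks kept: ", len(block_store))      # 3 -> space saved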

BENEFITS OF NETAPP STORAGE

Many companies provide storage virtualization, but the main benefits of using NetApp are:

  1. Multiprotocol architecture
  2. Pooled storage
  3. Disaster recovery more accessible
  4. Lower overall TCO
  5. Investment protection.

COURSE COMPLETION CERTIFICATION BY NetApp

 

Virtualization

CERTIFICATION FOR NetApp

 

CLOUD FUNDAMENTALS

WHAT IS CLOUD COMPUTING?

Cloud computing is defined as storing and accessing data and resources from a virtual system over the Internet instead of on local physical devices such as computer hard drives. There are different opinions and definitions of cloud computing; let's discuss the NetApp CIO's view of it.

There are many cloud services in the market. Users might think of the cloud only as Software as a Service (SaaS) hosting, which can be expensive, but the cloud is a cost-efficient solution for any enterprise. The cloud is very helpful for organizations because of its scalability and flexibility: costs can be matched with business demand. It is fast, and new things can be tried without buying new infrastructure.

The National Institute of Standards and Technology (NIST), the US organization responsible for standards and rules for US federal agencies, defines cloud computing as a model that provides convenient, on-demand network access to a shared pool of computing resources that can be rapidly provisioned with minimal management effort.


CLOUD COMPUTING

IMPORTANCE OF CLOUD COMPUTING

For a business, we should consider speed, growth, cost, and consistency for users. The cloud responds to all of these needs.

For speed – self-service delivery makes services available on demand, quickly and simply.

For growth – on-demand (elastic) resources are allocated dynamically based on requirements.

For cost – a metered, cost-effective solution: instead of buying large infrastructure, we consume cloud services.

For consistency – reduced risk: the cloud increases availability and reduces risk.


CLOUD OVERVIEW

DELIVERY METHODS

There are three primary delivery methods for cloud computing, plus the broader concept of IT as a Service: –

INFRASTRUCTURE AS A SERVICE (IaaS): – in this type, we use basic infrastructure such as compute, network, and storage systems, which can be VMs, block storage, and firewalls; sometimes software platforms are included as well. These services are hosted in the cloud and provide core cloud characteristics such as scalability and self-provisioning. Examples: Rackspace, Amazon EC2, Amazon S3, etc.

PLATFORM AS A SERVICE (PaaS): – these services are built on top of the infrastructure and provide a platform with the required applications, such as databases, web servers, programming language frameworks, and the runtime environment. The platform changes according to the applications in demand. Examples: Heroku, OpenShift, etc.

SOFTWARE AS A SERVICE (SaaS): – this type of service extends the infrastructure and platform to provide direct functional access to an application and its capabilities. The process of scaling and managing the resources is hidden from the consumer. Examples: Salesforce, ServiceNow, etc.

IT AS A SERVICE (ITaaS): – this is essentially running IT like a business and optimizing IT infrastructure according to business needs. Many new technology and consumption models offer both internal and external services. They create new policies, such as bring your own device (BYOD), to reduce costs and simplify users' needs. New operational models bring in new technical skills and roles. These models point to a future in which IT competes as a service provider that meets user needs well.

DEPLOYMENT MODELS

There are three deployment models for cloud computing: –

OPEN PUBLIC: – cloud resources are typically delivered over the internet or an open network. A public cloud is typically owned and operated by commercial service providers who offer access to consumers. Examples: AWS, Rackspace, and Salesforce.

ENTERPRISE PRIVATE: – cloud services that are built specifically for one entity (a group, organization, or company). The infrastructure can be hosted internally or externally, and managed internally or by a third party.

HYBRID: – a combination of one or more public and private clouds, bound together by a common fabric. This model gives consumers options to meet their business requirements.

CHARACTERISTICS OF CLOUD

There are five characteristics of cloud computing: –

SELF-SERVICE: – through a single portal or developer API, companies can manage servers, systems, and resources. Resources are available to consumers on demand.

BROAD NETWORK ACCESSIBILITY: – cloud services are provided consistently and are available and accessed through many platforms, such as laptops, desktops, and mobile devices.

SHARED: – cloud resources are pooled and shared efficiently among many consumers. This optimizes resource allocation and cost.

RAPID ELASTICITY: – resources can be scaled up and down dynamically, and hosts can be added or removed by scaling out and in. This makes the cloud very elastic for consumers; resources are scaled based on usage.

MEASURED (METERED): – usage is calculated from what consumers actually use. It is a pay-per-use model, like paying for water or electricity in daily life.
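The metered model is simple arithmetic: multiply what was consumed by a unit price, exactly like a utility bill. The rates and usage figures in the Python sketch below are invented for illustration.

    # Hypothetical pay-per-use bill: charge only for what was consumed this month.
    rates = {"vm_hours": 0.05, "storage_gb_month": 0.02, "egress_gb": 0.09}   # $ per unit
    usage = {"vm_hours": 720, "storage_gb_month": 500, "egress_gb": 40}

    bill = sum(usage[item] * rates[item] for item in usage)
    print(f"Monthly bill: ${bill:.2f}")   # 720*0.05 + 500*0.02 + 40*0.09 = $49.60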

CONSIDERATIONS WITH CLOUD


CLOUD SECURITY 

IT provides different types of cloud to users, but there are a few considerations because the cloud is not a perfect solution for all industries; only some services meet business goals. We need to select the specific cloud type that provides good security and governance. There are five areas we must consider for cloud usage: –

SECURITY AND RISKS: – this is the major aspect each company should examine during cloud selection. There are many cloud vendors in the market, but we need to look for the cloud deployment that suits our company.

PRIVACY: – company data will be stored on servers operated by third-party cloud providers. There are a few privacy-related risks: –

  1. Limited control
  2. Inadequate security
  3. System breaches
  4. Compromised data
  5. Legal problems

COMPLIANCE: – compliance effort and cost increase in the cloud, and continuous monitoring and auditing of the cloud environment may be required. This may relate to IT security and other procedures. Other considerations include: –

  1. Audit and compliance risk
  2. Security risks
  3. Information risks
  4. Billing risks and contract risks

VENDOR LOCK-IN: – lock-in works against business continuity and seamless migration between cloud services. It can be caused by: –

  1. Proprietary technology and incompatibility
  2. Inefficient processes
  3. Contract constraints

Fear of vendor lock-in is a major impediment to cloud service adoption. The complexity means that many customers stay with a provider that doesn't meet their needs.

PERFORMANCE: – the cloud provider must ensure the right level of performance and service quality. This depends on monitoring, the location of the cloud, provider capabilities, and resource disparity.

COURSE COMPLETION CERTIFICATION BY NetApp

Cloud Fundamentals

CERTIFICATE BY NETAPP

 

 

STORAGE FUNDAMENTALS

THINGS TO CONSIDER FOR STORAGE IMPLEMENTATION

DATA GROWTH: – when an organization grows, its data grows with it. All of this data is in electronic format, which consumes a lot of storage space. We need to find an alternative for this problem because data grows at an unpredictable rate, which can cause problems for any organization.

DATA FROM DIFFERENT SOURCES IN ELECTRONIC FORMAT

There are different types of data, created by both humans and machines. Emails and scanned files are human-made data; machine-made data comes from medical imaging systems, telecommunications, utility equipment, and financial transactions. Currently, large organizations in fields such as healthcare, media, finance, and other digital industries use petabyte-sized storage, which is no longer sufficient because of rapid growth in the size, quantity, and lifespan of data. Managing data across geographical locations is challenging because the data must be used in a fast-paced environment with a continuous flow of data.

DIFFICULTY IN MANAGING DATA

DATA STORAGE SOLUTIONS: – there are a few solutions for storing this vast amount of data. We can use disks, disk arrays, just a bunch of disks (JBOD), and intelligent storage systems.

  1. Disks: – These are used in desktop computers (3.5-inch) and laptops (2.5-inch). A disk uses rapidly rotating platters coated with magnetic material to store and retrieve digital information.
  2. Disk Arrays: – A collection of disks used in a redundant manner and controlled by firmware. A disk array contains cache memory, a redundant array of independent disks (RAID), and virtualization. Advantages: – availability, resiliency, and maintainability through redundant controllers, fans, and power supplies.
  3. Just a Bunch of Disks (JBOD): – These disks are not in a RAID configuration; without any pooling or structuring, they can be used in servers as storage, either as a single logical volume or as separate logical volumes, typically with Windows or Linux software volume management.
  4. Intelligent Storage Systems: – There are four key components: front-end, cache, back-end, and physical disks. An input/output request received from the host at the front-end is sent through the cache and the back-end to store or retrieve data on the physical disks. The cache can answer a read request directly if the data is already in the cache (see the sketch after the figure below).

INTELLIGENT STORAGE SYSTEM
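The front-end/cache/back-end flow described in item 4 above can be sketched as a read-through cache in Python: the front-end checks the cache first and goes to the back-end (and physical disks) only on a miss. This is a conceptual illustration, not a real array's firmware.

    # Conceptual intelligent storage system: the cache answers repeat reads,
    # the back-end fetches from the physical disks only on a cache miss.
    class IntelligentStorage:
        def __init__(self, physical_disks):
            self.disks = physical_disks       # dict: block number -> data (back-end)
            self.cache = {}

        def read(self, block_number):
            if block_number in self.cache:    # cache hit: fast path
                return self.cache[block_number]
            data = self.disks[block_number]   # cache miss: go through the back-end
            self.cache[block_number] = data   # keep it for the next read
            return data

    array = IntelligentStorage({0: "superblock", 7: "customer record"})
    array.read(7)                 # miss: served from the physical disks
    array.read(7)                 # hit: served from the cache
    print(len(array.cache))       # 1 block cached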

FACTORS TO CONSIDER FOR STORAGE SOLUTION: –

There are many storage solutions in the market, but there are a few factors users consider to match their needs.

APPLICATIONS: – certain applications are useful for the business, and data must be stored for those applications. For example, if a company such as Amazon introduces a new application for online services, additional storage is needed for it.

DATA PROTECTION: – data should be protected from destruction: disk data may be lost, disks may burn, or a natural disaster may strike. Built-in protection guards against lost data, but for physical damage we need disaster recovery strategies, such as keeping backup copies in another geographical location. For example, tape backups can be made periodically and sent to another location, so the data is stored in two different places.

DATA AVAILABILITY: – data must always be available; this is achieved through data protection and by restoring access as quickly as possible. There are a few ways to make data available: hardware redundancy, application availability, and disaster recovery systems. With hardware redundancy, the data is replicated on two different storage systems; if one fails, the data can be served from the other, so users can access it immediately. This is called a high-availability configuration. Application availability means that even if a disk fails, the applications remain available. Some businesses, such as trading or financial firms, cannot rely on recovery alone and must have disaster recovery plans.

DATA SECURITY: – data should be protected from outside intrusion, and internal security breaches can also occur, so data must be protected as it moves from one storage system to another. Another approach is to integrate the storage system with security implementations. Data should be encrypted for protection, and the key must be provided for decryption.
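As a minimal sketch of the encrypt-with-a-key idea above, the example below uses the third-party Python cryptography package (assuming it is installed). It is one possible illustration, not a mechanism prescribed by the course.

    # Minimal encrypt-then-decrypt example using the third-party 'cryptography'
    # package (pip install cryptography). Only the holder of the key can decrypt.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()            # the key must be stored and shared securely
    cipher = Fernet(key)

    plaintext = b"records replicated to the remote storage system"
    ciphertext = cipher.encrypt(plaintext)               # safe to transmit or store

    assert cipher.decrypt(ciphertext) == plaintext       # possible only with the key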

SCALABILITY: – as an organization grows, its data storage systems should also grow to match its needs. One way to do this is to implement a storage grid, so that all the storage systems relate to each other and can be accessed within the grid.

PERFORMANCE: – this involves the throughput, response time, capacity, and reliability of the storage systems. Performance cannot be considered in isolation: with a fast storage system but slow servers, overall performance will still be low.

COST: – the cost of a storage system includes not only the purchase price but also the cost of maintaining it. A cost-efficient solution will be simple to manage, reliable, and highly available.

STORAGE TECHNOLOGIES

There are a few storage technologies, such as direct-attached storage, network-attached storage, storage virtualization, flash storage, and cloud computing; we will discuss each one individually.

DIRECT-ATTACHED STORAGE (DAS): – a digital storage system attached directly to a server or client without any storage network in between. It can be an individual disk in the server or client, or a group of disks within the server. The major protocols used for DAS are ATA, SATA, SCSI, FC, and SAS, which will be discussed in the next module.


DIRECT-ATTACHED STORAGE ARCHITECTURE

DAS is best suited for a single server or a few servers, for example in small businesses that do not need to share data over long distances. DAS allows application servers to deliver the best performance, and it is a cost-effective storage system for a small office/home office (SOHO) network. For organizations with rapid growth, however, DAS does not scale well, so NAS and SAN implementations become important, from both a cost perspective and an administration perspective.

NETWORK-ATTACHED STORAGE (NAS): – a file-based storage system that makes data available over the network. It uses the CIFS and NFS protocols for file services. The application servers do not store the data; a separate NAS storage system holds it so that the servers can perform at high speed.

Users access the data through an Ethernet switch that connects to the NAS storage system, which appears as a file server with an IP address. It contains a disk array with RAID technology to protect against disk failure, and external storage can be attached for additional capacity.

NAS is easy to install, deploy, and manage; organizations use it because it is cost-effective and can be accessed by many devices. It is good for consolidating DAS resources for better utilization: a DAS system may use only half of its capacity, whereas with NAS the utilization rate is high because the storage is shared across multiple servers and used completely.


NETWORK-ATTACHED STORAGE ARCHITECTURE

STORAGE AREA NETWORKS (SAN): – a block-based data storage system made available over the network using the FC, FCoE, and iSCSI protocols. There are four different layers: the client layer, server layer, fabric layer, and storage layer.

It is used to move large blocks of data to dedicated storage devices and is used for databases, imaging, and transaction processing.


STORAGE AREA NETWORK ARCHITECTURE

There are different types of SANs:-

  1. FIBRE CHANNEL (FC) SAN: – the default protocol for SAN environments, a high-speed network technology used for storage. It uses host bus adapters (HBAs) connected directly to the servers, which can also be connected to clients through an FC switch. The FC switch can detect failed connections and reroute data to the correct devices. The benefits of FC SAN are fast backup and restore, improved business continuance, high availability, and storage consolidation.
  2. FC OVER ETHERNET (FCoE) SAN: – it combines the FC protocol with enhanced 10-Gigabit Ethernet. It delivers performance and service comparable to FC, and it eliminates the need for two separate data center networks, reducing network cost and complexity.
  3. iSCSI SAN: – an IP-based storage access protocol. The main components are iSCSI initiators and iSCSI targets: initiators send iSCSI command sequences, and targets are storage devices with iSCSI enabled. It uses standard Ethernet cables, adapters, and switches.

STORAGE VIRTUALIZATION: – storage virtualization pools the storage from different physical devices into a single virtual space, from which users can request storage space out of the available pool of disks.


STORAGE VIRTUALIZATION ARCHITECTURE

There are several benefits of using storage virtualization: –

  1. Increased Utilization: – by storing data in a virtual environment, users draw from the available pool of storage, which is easy for the administrator to manage. This increases storage utilization.
  2. Simplified Storage Management: – in virtualized storage, the administrator can manage the storage from a single device without configuring many physical devices. The storage can be monitored easily, and if updates or errors occur they can be handled from a single place. Storage can be thinly or dynamically provisioned (see the sketch after this list).
  3. Increased Flexibility: – it is easy to migrate storage from one geographical location to another without making any adjustments to the applications. Even for disaster recovery, it is easy to back up the storage system in the virtual environment.
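Thin provisioning, mentioned in item 2 above, can be sketched as follows: each volume advertises a large logical size, but physical space is drawn from the shared pool only as data is actually written. The class and the sizes used here are hypothetical illustrations.

    # Conceptual thin provisioning: logical sizes may exceed physical capacity;
    # physical space is taken from the pool only when data is actually written.
    class ThinPool:
        def __init__(self, physical_gb):
            self.physical_gb = physical_gb
            self.used_gb = 0
            self.volumes = {}                            # name -> advertised logical size

        def create_volume(self, name, logical_gb):
            self.volumes[name] = logical_gb              # nothing consumed yet

        def write(self, name, gb):
            if self.used_gb + gb > self.physical_gb:
                raise RuntimeError("pool exhausted: add disks before this point")
            self.used_gb += gb

    pool = ThinPool(physical_gb=1000)
    pool.create_volume("vm_datastore", logical_gb=800)
    pool.create_volume("file_share", logical_gb=800)     # 1600 GB promised, 1000 GB real
    pool.write("vm_datastore", 120)
    print(pool.used_gb, "GB physically used of", pool.physical_gb)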

ADVANCED STORAGE TECHNOLOGIES

  • FLASH STORAGE: – widely used in modern storage systems, it offers high speed, uses less electricity, and reads data faster. Flash storage consists of a memory unit and an access controller: the memory unit stores the data, and the access controller manages the free space in the storage. It is flexible, cost-efficient, and optimizes storage.
  • CLOUD COMPUTING: – virtual servers are used to access applications and are used in data centers to manage everything from a single place. Workloads can be moved from one location to another without any changes to the physical devices.

There are three types of cloud computing:-

  1. PRIVATE CLOUD: – this type of cloud is operated for a single organization and used within that organization; it can be managed internally or by a third party. It is very secure and easy to control, but the company has to spend more money on software and infrastructure. It is best for organizations that depend on their own data and applications and require a secure, private network.
  2. PUBLIC CLOUD: – all services and infrastructure are available for public use. It is best for sharing resources effectively and is very cost-efficient: no software purchase is required, and it is available over the internet. However, it is less secure for personal or sensitive data, which travels across the internet and could be accessed by others.
  3. HYBRID CLOUD: – a combination of public and private clouds, in which users can maintain data more efficiently by spreading it across both. Organizations have to keep an eye on the secure platforms so that they do not mix with public users, and allow the public to access only the data that is required.

COURSE COMPLETION CERTIFICATION BY NetApp

 

Storage Fundamentals

CERTIFICATION BY NetApp