Ebook storage information management

He is an internationally recognized authority in information management. His consulting work has included many of the largest global companies as well as numerous midmarket companies. His teams have won several best-practice competitions for their implementations, and many of his clients have gone public with their success stories.

His strategies form the information management plan for leading companies in various industries. William is a very popular speaker worldwide and a prolific writer with hundreds of articles and white papers published.

William is a distinguished entrepreneur, and a former Fortune 50 technology executive and software engineer. He provides clients with strategies, architectures, platform and tool selection, and complete programs to manage information.

A DBMS optimizes the storage and retrieval of data. The five core elements of a data center (applications, databases, servers and operating systems, networks, and storage arrays) are typically viewed and managed as separate entities, but all of the elements must work together to address data processing requirements.

The figure shows an example of an order processing system that involves the five core elements of a data center and illustrates their functionality in a business process. Beyond working together, these elements must satisfy several key requirements: availability, security, scalability, performance, data integrity, capacity, and manageability. Availability is fundamental: it is necessary to have a reliable infrastructure that ensures data is accessible at all times. The various technologies and solutions to meet these requirements are covered in this book.
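
To make the interplay of these elements concrete, here is a minimal Python sketch that traces an order from the application through the DBMS to persistent storage. SQLite and a local database file stand in for an enterprise DBMS and a storage array, and the table and order data are hypothetical.

    import sqlite3

    def place_order(conn, customer, item, quantity):
        # Application layer: business logic that accepts the customer's order.
        # Database layer: the DBMS structures the data and optimizes storage/retrieval.
        conn.execute(
            "INSERT INTO orders (customer, item, quantity) VALUES (?, ?, ?)",
            (customer, item, quantity),
        )
        conn.commit()  # Host/OS and storage: the committed rows are persisted to disk.

    conn = sqlite3.connect("orders.db")  # A local file stands in for a storage array
                                         # reached over a storage network.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(id INTEGER PRIMARY KEY, customer TEXT, item TEXT, quantity INTEGER)"
    )
    place_order(conn, "Acme Corp", "disk shelf", 2)
    print(conn.execute("SELECT customer, item, quantity FROM orders").fetchall())
    conn.close()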

The inability of users to access data can have a significant negative impact on a business. Security matters as well: in addition to the security measures for client access, specific mechanisms must enable servers to access only their allocated resources on storage arrays. Scalability is equally important, because business growth often requires deploying more servers, new applications, and additional databases.

The storage solution should be able to grow with the business, and the infrastructure should be able to support its performance requirements. Data integrity is another concern: any variation in data during its retrieval implies corruption, which may affect the operations of the organization.

Capacity must keep pace as well. When capacity requirements increase, the data center must be able to provide additional capacity without interrupting availability or, at the very least, with minimal disruption. Capacity may be managed by reallocating existing resources rather than by adding new ones.

Finally, manageability can be achieved through automation and the reduction of manual human intervention in common tasks. The aspects of a data center that are monitored include security, performance, accessibility, and capacity. Reporting tasks help to establish business justifications and chargeback of the costs associated with data center operations. Provisioning activities include capacity and resource planning. Resource planning is the process of evaluating and identifying required resources such as personnel, the facility (site), and the technology.

Resource planning ensures that adequate resources are available to meet user and application requirements. If utilization of the storage capacity is properly monitored and reported, business growth can be understood and future capacity requirements can be anticipated. This helps to frame a proactive data management policy.
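
As a rough illustration of this kind of monitoring and projection, the sketch below reads current file-system utilization and extrapolates a fill date from a hypothetical utilization history; a real capacity report would be fed by the array's or host's monitoring tools.

    import shutil

    def report_utilization(path="/"):
        # Current utilization of the file system that holds `path`.
        usage = shutil.disk_usage(path)
        pct = 100.0 * usage.used / usage.total
        print(f"Used {usage.used / 2**30:.1f} GiB of {usage.total / 2**30:.1f} GiB ({pct:.1f}%)")

    def days_until_full(history, total_bytes):
        # Project days until exhaustion from (day, used_bytes) samples, assuming the
        # average growth rate over the observation window continues unchanged.
        (d0, u0), (d1, u1) = history[0], history[-1]
        daily_growth = (u1 - u0) / (d1 - d0)
        if daily_growth <= 0:
            return None  # no growth observed, nothing to project
        return (total_bytes - u1) / daily_growth

    report_utilization()
    # Hypothetical 30-day window: usage grew from 4.0 TiB to 4.6 TiB on a 6 TiB system.
    history = [(0, 4.0 * 2**40), (30, 4.6 * 2**40)]
    print(f"Days until full at the observed growth rate: {days_until_full(history, 6 * 2**40):.0f}")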

Duplication of data to ensure high availability and to allow repurposing has also contributed to the multifold increase in information growth. The value of information often changes over time, and framing a policy to meet these challenges involves understanding the value of information over its lifecycle. When data is first created, it often has the highest value and is used frequently. As data ages, it is accessed less frequently and is of less value to the organization.

Understanding the information lifecycle helps to deploy appropriate storage infrastructure, according to the changing value of information.

For example, in a sales order application, the value of the information changes from the time the order is placed until the time the warranty becomes void, as the figure illustrates. The value of the information is highest when a company receives a new sales order and processes it to deliver the product.

After order fulfillment, the customer or order data need not be available for real-time access. The company can transfer this data to less expensive secondary storage with lower accessibility and availability requirements, unless or until a warranty claim or another event triggers its need. After the warranty becomes void, the company can archive or dispose of the data to create space for other high-value information. Data centers can accomplish this with the optimal and appropriate use of storage infrastructure.
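
The paragraph above is essentially a placement policy, and it can be sketched in a few lines of Python. The record fields, tier names, and dates below are hypothetical; they only illustrate how the lifecycle stage drives the storage decision.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class OrderRecord:
        order_id: str
        fulfilled: bool
        warranty_expires: date

    def placement(order, today):
        # Map an order's lifecycle stage to a storage decision.
        if not order.fulfilled:
            return "primary storage"    # active order: highest value, frequent access
        if today <= order.warranty_expires:
            return "secondary storage"  # fulfilled, retained for possible warranty claims
        return "archive or dispose"     # warranty void: reclaim space for higher-value data

    print(placement(OrderRecord("SO-1001", fulfilled=True, warranty_expires=date(2026, 6, 30)),
                    date.today()))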

An effective information management policy is required to support this infrastructure and leverage its benefits. Information lifecycle management (ILM) is a proactive strategy that enables an IT organization to effectively manage data throughout its lifecycle, based on predefined business policies.

This allows an IT organization to optimize the storage infrastructure for maximum return on investment. ILM should be implemented as a policy and should encompass all business applications, processes, and resources. In practice, ILM is implemented through tiered storage: each tier has different levels of protection, performance, data access frequency, and other considerations, and information is stored and moved between tiers based on its value over time.

For example, mission-critical and most frequently accessed information may be stored on Tier 1 storage, which consists of high-performance media with the highest level of protection. Less frequently accessed but still important data may be stored on Tier 2 storage, which uses less expensive media with moderate performance and protection. Rarely accessed or event-specific information may be stored on lower tiers of storage.
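
A minimal sketch of such a tier assignment follows, with hypothetical access-frequency thresholds rather than values drawn from any particular product or policy.

    def assign_tier(accesses_per_month, mission_critical):
        # Thresholds are illustrative; real policies come from business requirements.
        if mission_critical or accesses_per_month > 1000:
            return "Tier 1"  # high-performance media, highest level of protection
        if accesses_per_month > 50:
            return "Tier 2"  # less expensive media, moderate performance and protection
        return "Tier 3"      # lower tier for rarely accessed or event-specific data

    print(assign_tier(5000, mission_critical=True))   # Tier 1
    print(assign_tier(120, mission_critical=False))   # Tier 2
    print(assign_tier(2, mission_critical=False))     # Tier 3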

The figure illustrates a three-step road map to enterprise-wide ILM. Steps 1 and 2 are aimed at implementing ILM in a limited way across a few enterprise-critical applications.

In Step 1, the goal is to implement a storage networking environment. Storage architectures offer varying levels of protection and performance, and this acts as a foundation for future policy-based information management in Steps 2 and 3. The value of tiered storage platforms can be exploited by allocating appropriate storage resources to applications based on the value of the information they process. Step 2 takes ILM to the next level, with detailed application or data classification and linkage of the storage infrastructure to business policies.

These classifications and the resultant policies can be executed automatically, using tools, for one or more applications, resulting in better management and optimal allocation of storage resources. Step 3 of the implementation is to automate more of the application or data classification and policy management activities in order to scale to a wider set of enterprise applications.
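
The following sketch illustrates Steps 2 and 3 in miniature: a classification rule assigns each dataset to a class, and a policy table linked to those classes is executed automatically. The application names, classes, and policies are entirely hypothetical; real ILM tools express such rules in their own policy frameworks.

    from datetime import date, timedelta

    # Business policies: each data class is linked to a storage tier and a protection level.
    POLICIES = {
        "mission-critical":   {"tier": "Tier 1", "protection": "remote replication"},
        "business-important": {"tier": "Tier 2", "protection": "daily backup"},
        "reference":          {"tier": "Tier 3", "protection": "archive copy"},
    }

    def classify(dataset):
        # Step 2: classify application data from simple attributes.
        if dataset["application"] in {"order-processing", "billing"}:
            return "mission-critical"
        if date.today() - dataset["last_accessed"] < timedelta(days=90):
            return "business-important"
        return "reference"

    def apply_policies(datasets):
        # Step 3: execute the linked policy automatically for every dataset.
        for ds in datasets:
            policy = POLICIES[classify(ds)]
            print(f"{ds['name']}: place on {policy['tier']}, protect with {policy['protection']}")

    apply_policies([
        {"name": "orders-db", "application": "order-processing", "last_accessed": date.today()},
        {"name": "2019-campaign-assets", "application": "marketing",
         "last_accessed": date.today() - timedelta(days=400)},
    ])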

As a result, resources are not wasted, and complexity is not introduced by managing low-value data at the expense of high-value data. This chapter also emphasized the importance of the ILM strategy, which businesses are adopting to manage information effectively across the enterprise. ILM enables businesses to gain competitive advantage by classifying, protecting, and leveraging information.

The evolution of storage architectures and the core elements of a data center covered in this chapter provide the foundation for information storage.

The next chapter discusses the storage system environment.

Exercises

1. A hospital uses an application that stores patient X-ray data in the form of large binary objects in an Oracle database. A storage array with 6 terabytes of usable capacity provides storage to the UNIX server on which the application runs. What are the typical challenges the storage management team may face in meeting the service-level demands of the hospital staff?

2. The engineering design department of a large company maintains a large number of engineering drawings that its designers access and reuse in their current projects, modifying or updating them as required. The design team wants instant access to the drawings for its current projects, but it is currently constrained by an infrastructure that is not able to scale to meet the response-time requirements.

3. The marketing department at a midsize firm is expanding. IT has given marketing a networked drive on the LAN, but it keeps reaching capacity every third week. The drive already holds hundreds of files, and the amount of data continues to grow. Users are complaining about LAN response times and capacity. As the IT manager, what could you recommend to improve the situation?

4. A large company is considering a storage infrastructure, one that is scalable and provides high availability. More importantly, the company also needs performance for its mission-critical applications.







