Braindumps for "E10-001" Exam

Information Storage and Management Exam Version 2

 Question 1.
Which cache management algorithm is based on the assumption that data will not be requested by the host when it has not been accessed for a while?

A. LRU
B. HWM
C. LWM
D. MRU

Answer: A

Explanation:
Cache Management: Algorithms
Cache is a finite and expensive resource that needs proper management. Even though modern intelligent storage systems come with a large amount of cache, when all cache pages are filled, some pages have to be freed up to accommodate new data and avoid performance degradation. Various cache management algorithms are implemented in intelligent storage systems to proactively maintain a set of free pages and a list of pages that can be potentially freed up whenever required.
The most commonly used algorithms are discussed in the following list:
• Least Recently Used (LRU): An algorithm that continuously monitors data access in cache and identifies the cache pages that have not been accessed for a long time. LRU either frees up these pages or marks them for reuse. This algorithm is based on the assumption that data that has not been accessed for a while will not be requested by the host (see the sketch after this list). However, if a page contains write data that has not yet been committed to disk, the data is first written to disk before the page is reused.
• Most Recently Used (MRU): This algorithm is the opposite of LRU, where the pages that have been accessed most recently are freed up or marked for reuse. This algorithm is based on the assumption that recently accessed data may not be required for a while.
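The following minimal Python sketch (an illustration only, not the algorithm as implemented in any storage system) shows the LRU behavior described above: the least recently accessed page is evicted first, and dirty pages are written to disk before their cache page is reused. The class and method names are invented for the example.

from collections import OrderedDict

class LRUCache:
    """Minimal LRU page-cache sketch: the least recently accessed page is reused first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> (data, dirty_flag)

    def access(self, page_id, data=None, write=False):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)  # most recently used moves to the end
        elif len(self.pages) >= self.capacity:
            old_id, (old_data, dirty) = self.pages.popitem(last=False)  # evict the LRU page
            if dirty:
                self.commit_to_disk(old_id, old_data)  # write data is destaged before reuse
        if write:
            self.pages[page_id] = (data, True)  # write data, not yet committed to disk
        else:
            self.pages.setdefault(page_id, (data, False))  # read miss: page staged into cache
        return self.pages[page_id][0]

    def commit_to_disk(self, page_id, data):
        pass  # placeholder for the actual destage operation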
EMC E10-001 Student Resource Guide. Module 4: Intelligent Storage System

Question 2.
What does the area ID of the FC address identify?

A. Group of ports within a switch
B. An individual port within a fabric
C. Location of the name server within the fabric
D. Unique number provided to each switch in the fabric

Answer: A

Explanation:
FC Addressing in Switched Fabric
An FC address is dynamically assigned when a node port logs on to the fabric. The FC address is a 24-bit value with three fields. The first field contains the domain ID of the switch. A domain ID is a unique number provided to each switch in the fabric. Although this is an 8-bit field, only 239 addresses are available for domain IDs because some addresses are deemed special and reserved for fabric management services. For example, FFFFFC is reserved for the name server, and FFFFFE is reserved for the fabric login service. The area ID is used to identify a group of switch ports used for connecting nodes; an example of a group of ports with a common area ID is a port card on the switch. The last field, the port ID, identifies the individual port within the group.
Therefore, the maximum possible number of node ports in a switched fabric is calculated as: 
239 domains X 256 areas X 256 ports = 15,663,104
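To make the address layout concrete, the hypothetical helper below splits a 24-bit FC address into its domain, area, and port fields (one byte each); the function name and example address are invented for illustration.

def split_fc_address(fc_id):
    """Split a 24-bit FC address into its domain, area, and port fields (one byte each)."""
    domain_id = (fc_id >> 16) & 0xFF  # identifies the switch in the fabric
    area_id = (fc_id >> 8) & 0xFF     # identifies a group of ports, e.g. a port card
    port_id = fc_id & 0xFF            # identifies the individual port within that group
    return domain_id, area_id, port_id

print(split_fc_address(0x010200))  # address 0x010200 -> (1, 2, 0)
print(239 * 256 * 256)             # maximum node ports in a switched fabric: 15663104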
 

EMC E10-001 Student Resource Guide. Module 5: Fibre Channel Storage Area Network (FC SAN)

Question 3.
An organization performs copy on first access (CoFA) replication to create a local replica of application data. To perform a successful restore, what should be considered?

A. Source devices must be healthy
B. Save location size must be larger than the size of all source devices
C. Save location size must be equal to the size of all source devices
D. All changes to the source and replica must be discarded before the restore starts

Answer: A

Explanation:
Replication: Restore & Restart Considerations
Local replicas are used to restore data to production devices. Alternatively, applications can be restarted using the consistent point-in-time replicas. Replicas are used to restore data to the production devices if logical corruption of data on production devices occurs, that is, when the devices are available but the data on them is invalid. Examples of logical corruption include accidental deletion of data (tables or entries in a database), incorrect data entry, and incorrect data updates. Restore operations from a replica are incremental and provide a small RTO. In some instances, the applications can be resumed on the production devices prior to the completion of the data copy. Prior to the restore operation, access to production and replica devices should be stopped.
Production devices might also become unavailable due to physical failures, such as a production server or physical drive failure. In this case, applications can be restarted using the data on the latest replica. As a protection against further failures, a "Gold Copy" (another copy of the replica device) should be created to preserve the data in the event of failure or corruption of the replica devices. After the issue has been resolved, the data from the replica devices can be restored back to the production devices.
Full-volume replicas (both full-volume mirrors and pointer-based replicas in Full Copy mode) can be restored to the original source devices or to a new set of source devices. Restores to the original source devices can be incremental, but restores to a new set of devices are full-volume copy operations.
In pointer-based virtual and pointer-based full-volume replication in CoFA mode, access to data on the replica is dependent on the health and accessibility of the source volumes. If the source volume is inaccessible for any reason, these replicas cannot be used for a restore or a restart operation.
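The dependency on source health can be pictured with the following conceptual sketch of a copy-on-first-access read path (an illustration only, not EMC's implementation, with invented names): blocks that have not yet been copied to the save location are still served through pointers to the source device, so the source must remain healthy for the replica to be usable.

def read_from_cofa_replica(block, copied_blocks, save_location, source_device):
    """Conceptual CoFA read path: blocks not yet copied are still served from the source."""
    if block in copied_blocks:
        return save_location[block]  # this block was copied to the save location on first access
    if source_device is None:  # source device failed or is inaccessible
        raise IOError("CoFA replica unusable: the source device is unavailable")
    return source_device[block]  # the pointer still references the source device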
EMC E10-001 Student Resource Guide. Module 11: Local Replication

Question 4.
Which host component eliminates the need to deploy separate adapters for FC and Ethernet communications?

A. Converged network adapter
B. TCP Offload Engine NIC
C. FCIP bridging adapter
D. iSCSI host bus adapter

Answer: A

Explanation:
Converged Network Adapter (CNA)
 

A CNA provides the functionality of both a standard NIC and an FC HBA in a single adapter and consolidates both types of traffic. A CNA eliminates the need to deploy separate adapters and cables for FC and Ethernet communications, thereby reducing the required number of server slots and switch ports. A CNA offloads the FCoE protocol processing task from the server, thereby freeing the server CPU resources for application processing. A CNA contains separate Application Specific Integrated Circuits (ASICs) for 10 Gigabit Ethernet, Fibre Channel, and FCoE. The FCoE ASIC encapsulates FC frames into Ethernet frames. One end of this ASIC is connected to the 10GbE and FC ASICs for server connectivity, while the other end provides a 10GbE interface to connect to an FCoE switch.
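To make the encapsulation step concrete, here is a simplified, hedged sketch of wrapping an FC frame in an Ethernet frame. The EtherType 0x8906 is the one registered for FCoE; the function name and example addresses are invented, and real FCoE framing carries additional fields (version, SOF/EOF delimiters, padding) that are omitted here.

import struct

FCOE_ETHERTYPE = 0x8906  # EtherType registered for FCoE

def encapsulate_fc_frame(dest_mac, src_mac, fc_frame):
    """Simplified sketch: wrap an FC frame payload in an Ethernet frame."""
    header = dest_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return header + fc_frame

ethernet_frame = encapsulate_fc_frame(b"\x0e\xfc\x00\x00\x00\x01",   # example destination MAC
                                      b"\x0e\xfc\x00\x00\x00\x02",   # example source MAC
                                      b"<encapsulated FC frame bytes>")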
EMC E10-001 Student Resource Guide. Module 6: IP SAN and FCoE

Question 5.
What is a function of unified management software in cloud computing?

A. Defining cloud service attributes
B. Consolidating infrastructure resources scattered across one or more data centers
C. Metering based on usage of resources by the consumer
D. Providing an interface to consumers to request cloud services

Answer: B

Explanation:
Cloud Management and Service Creation Tools
The cloud management and service creation tools layer includes three types of software:
• Physical and virtual infrastructure management software
• Unified management software
• User-access management software
This classification is based on the different functions these software components perform. The components interact with each other to automate the provisioning of cloud services.
The physical and virtual infrastructure management software is offered by the vendors of the various infrastructure resources and by third-party organizations. For example, a storage array has its own management software. Similarly, network and physical servers are managed independently using network and compute management software respectively. These tools provide interfaces to construct a virtual infrastructure from the underlying physical infrastructure.
Unified management software interacts with all standalone physical and virtual infrastructure management software. It collects information on the existing physical and virtual infrastructure configurations, connectivity, and utilization, compiles this information, and provides a consolidated view of infrastructure resources scattered across one or more data centers. It allows an administrator to monitor performance, capacity, and availability of physical and virtual resources centrally. Unified management software also provides a single management interface to configure physical and virtual infrastructure and to integrate the compute (both CPU and memory), network, and storage pools. The integration allows a group of compute pools to use the storage and network pools for storing and transferring data respectively. The unified management software passes configuration commands to the respective physical and virtual infrastructure management software, which executes the instructions. This eliminates the need to administer compute, storage, and network resources separately using native management software.
The key function of the unified management software is to automate the creation of cloud services. It enables administrators to define service attributes such as CPU power, memory, network bandwidth, storage capacity, name and description of applications and platform software, resource location, and backup policy. When the unified management software receives consumer requests for cloud services, it creates the service based on predefined service attributes.
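For illustration only, a predefined cloud service of the kind described above might be captured by a unified management tool in a structure like the hypothetical one below; all attribute names and values are invented for the example.

# Hypothetical service-attribute definitions of the kind a unified management tool might hold.
service_catalogue = {
    "small_db_server": {
        "description": "Single-instance database platform",
        "cpu_ghz": 4,
        "memory_gb": 8,
        "network_bandwidth_mbps": 1000,
        "storage_capacity_gb": 500,
        "platform_software": ["Linux", "PostgreSQL"],
        "resource_location": "datacenter-1",
        "backup_policy": "daily",
    }
}

def provision_service(request):
    """Sketch: create a service instance from its predefined attributes."""
    attributes = service_catalogue[request["service_name"]]
    # Here the unified management software would pass configuration commands to the
    # compute, storage, and network management software that execute the instructions.
    return {"request_id": request["request_id"], "allocated": attributes}

print(provision_service({"request_id": 42, "service_name": "small_db_server"}))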

The user-access management software provides a web-based user interface to consumers. Consumers can use the interface to browse the service catalogue and request cloud services. The user-access management software authenticates users before forwarding their requests to the unified management software. It also monitors the allocation or usage of resources associated with the cloud service instances. Based on the allocation or usage of resources, it generates a chargeback report. The chargeback report is visible to consumers and provides transparency between consumers and providers.
EMC E10-001 Student Resource Guide. Module 13: Cloud Computing

Question 6.
Which EMC product provides the capability to recover data up to any point-in-time?

A. RecoverPoint
B. NetWorker
C. Avamar
D. Data Domain

Answer: A

Explanation:
EMC RecoverPoint
 

RecoverPoint is a high-performance, cost-effective, single product that provides local and remote data protection for both physical and virtual environments. It provides faster recovery and unlimited recovery points. RecoverPoint provides continuous data protection and performs replication between LUNs. RecoverPoint uses lightweight splitting technology at the application server, fabric, or array to mirror a write to a RecoverPoint appliance. The RecoverPoint family of products includes RecoverPoint/CL, RecoverPoint/EX, and RecoverPoint/SE. RecoverPoint/CL is a replication product for a heterogeneous server and storage environment; it supports both EMC and non-EMC storage arrays and supports host-based, fabric-based, and array-based write splitters. RecoverPoint/EX supports replication between EMC storage arrays and allows only array-based write splitting. RecoverPoint/SE is a version of RecoverPoint targeted at VNX series arrays and enables only Windows host-based and array-based write splitting.
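As a conceptual sketch of how continuous data protection can offer recovery to any point in time (not RecoverPoint's actual design; class and method names are invented), each mirrored write is appended to a time-ordered journal, and a recovery image is built by replaying the journal up to the requested time.

class WriteJournal:
    """Conceptual CDP journal: replaying entries rebuilds an image for any point in time."""

    def __init__(self):
        self.entries = []  # (timestamp, block, data), appended in time order

    def record_write(self, timestamp, block, data):
        self.entries.append((timestamp, block, data))  # every mirrored write is journaled

    def image_at(self, recovery_time):
        image = {}
        for timestamp, block, data in self.entries:
            if timestamp > recovery_time:
                break  # stop at the requested recovery point
            image[block] = data  # later writes to the same block supersede earlier ones
        return image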
EMC E10-001 Student Resource Guide. Module 11: Local Replication

Question 7.
What is needed to perform a non-disruptive migration of virtual machines (VMs) between hypervisors?

A. Hypervisors must have access to the same storage volume
B. Physical machines running hypervisors must have the same configuration
C. Hypervisors must be running within the same physical machine
D. Both hypervisors must have the same IP address

Answer: A

Explanation:
VM Migration: Hypervisor-to-Hypervisor
 

In hypervisor-to-hypervisor VM migration, the entire active state of a VM is moved from one hypervisor to another. This method involves copying the contents of the virtual machine's memory from the source hypervisor to the target and then transferring control of the VM's disk files to the target hypervisor. Because the virtual disks of the VM are not migrated, this technique requires both the source and target hypervisors to have access to the same storage.
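A highly simplified sketch of the steps described above follows; it is not any hypervisor's actual API, and every object and method name is a hypothetical placeholder. The point is that memory is copied across while the disk files are simply re-attached by the target, which is only possible when both hypervisors access the same storage volume.

def migrate_vm(vm, source_hypervisor, target_hypervisor, shared_storage):
    """Simplified hypervisor-to-hypervisor migration sketch (all names are hypothetical)."""
    # The virtual disks never move, so both hypervisors must see the same storage volume.
    assert vm.disk_volume in shared_storage.volumes, "shared storage access required"

    memory_image = source_hypervisor.copy_memory(vm)   # copy the VM's active memory state
    target_hypervisor.receive_memory(vm, memory_image)

    source_hypervisor.pause(vm)                        # brief switchover
    target_hypervisor.attach_disks(vm.disk_volume)     # take control of the VM's disk files
    target_hypervisor.resume(vm)
    source_hypervisor.release(vm)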
EMC E10-001 Student Resource Guide. Module 12: Remote Replication

Question 8.
Which iSCSI name requires an organization to own a registered domain name?

A. IQN
B. EUI
C. WWNN
D. WWPN

Answer: A

Explanation:
iSCSI Name
A unique worldwide iSCSI identifier, known as an iSCSI name, is used to identify the initiators and targets within an iSCSI network and to facilitate communication. The unique identifier can be a combination of the department, application, or manufacturer name, a serial number, an asset number, or any tag that can be used to recognize and manage the devices. The two types of iSCSI names in common use are:
• iSCSI Qualified Name (IQN): An organization must own a registered domain name to generate iSCSI Qualified Names. This domain name does not need to be active or resolve to an address; it just needs to be reserved to prevent other organizations from using the same domain name to generate iSCSI names. A date is included in the name to avoid potential conflicts caused by the transfer of domain names. An example of an IQN is iqn.2008-02.com.example:optional_string. The optional_string provides a serial number, an asset number, or any other device identifier. An iSCSI Qualified Name enables storage administrators to assign meaningful names to iSCSI devices and, therefore, manage those devices more easily.
• Extended Unique Identifier (EUI): An EUI is a globally unique identifier based on the IEEE EUI-64 naming standard. An EUI is composed of the eui prefix followed by a 16-character hexadecimal name, such as eui.0300732A32598D26.
In either format, the allowed special characters are dots, dashes, and blank spaces.
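The hypothetical helpers below illustrate how the two name formats described above can be constructed or recognized; they are a sketch, not a complete validator of the iSCSI naming rules.

import re
from datetime import date

def build_iqn(domain, registered_on, optional_string=""):
    """Build an iSCSI Qualified Name: iqn.<yyyy-mm>.<reversed domain>[:<optional_string>]."""
    reversed_domain = ".".join(reversed(domain.split(".")))
    name = "iqn.%s.%s" % (registered_on.strftime("%Y-%m"), reversed_domain)
    if optional_string:
        name += ":" + optional_string
    return name

def looks_like_eui(name):
    """Recognize an EUI-64 based name: the eui prefix followed by 16 hexadecimal characters."""
    return re.fullmatch(r"eui\.[0-9A-Fa-f]{16}", name) is not None

print(build_iqn("example.com", date(2008, 2, 1), "optional_string"))
# -> iqn.2008-02.com.example:optional_string
print(looks_like_eui("eui.0300732A32598D26"))  # True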
EMC E10-001 Student Resource Guide. Module 6: IP SAN and FCoE

Question 9.
Which data center requirement refers to applying mechanisms that ensure data is stored and retrieved as it was received?

A. Integrity
B. Availability
C. Security
D. Performance

Answer: A

Explanation:
Information Security Framework
The basic information security framework is built to achieve four security goals: confidentiality, integrity, and availability (CIA), along with accountability. This framework incorporates all of the security standards, procedures, and controls required to mitigate threats in the storage infrastructure environment.
Confidentiality: Provides the required secrecy of information and ensures that only authorized users have access to data. This requires authentication of users who need to access information. Data in transit (data transmitted over cables) and data at rest (data residing on primary storage, backup media, or in archives) can be encrypted to maintain its confidentiality. In addition to restricting unauthorized users from accessing information, confidentiality also requires the implementation of traffic flow protection measures as part of the security protocol. These measures generally include hiding source and destination addresses, the frequency of data being sent, and the amount of data sent.
Integrity: Ensures that the information is unaltered. Ensuring integrity requires detection of, and protection against, unauthorized alteration or deletion of information. It stipulates measures such as error detection and correction for both data and systems (a minimal checksum sketch follows this list).
Availability: This ensures that authorized users have reliable and timely access to systems, data and applications residing on these systems. Availability requires protection against unauthorized deletion of data and denial of service. Availability also implies that sufficient resources are available to provide a service.
Accountability: Refers to accounting for all the events and operations that take place in the data center infrastructure. The accountability service maintains a log of events that can be audited or traced later for the purpose of security.
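As a minimal illustration of the integrity goal only (error detection, not a complete security control), a stored object can be accompanied by a cryptographic digest that is re-checked on retrieval; the helper names below are invented for the example.

import hashlib

def store_with_digest(data):
    """Store data together with a digest so any alteration can be detected on retrieval."""
    return {"data": data, "sha256": hashlib.sha256(data).hexdigest()}

def is_unaltered(record):
    """Return True only if the data reads back exactly as it was stored."""
    return hashlib.sha256(record["data"]).hexdigest() == record["sha256"]

record = store_with_digest(b"archived document")
assert is_unaltered(record)            # data retrieved as it was received
record["data"] = b"tampered document"
assert not is_unaltered(record)        # unauthorized alteration is detected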
EMC E10-001 Student Resource Guide. Module 14: Securing the Storage Infrastructure

Question 10.
What describes a landing zone in a disk drive?

A. Area on which the read/write head rests
B. Area where the read/write head lands to access data
C. Area where the data is buffered before writing to platters
D. Area where sector-specific information is stored on the disk

Answer: A

Explanation:
Disk Drive Components
 

The key components of a hard disk drive are the platter, spindle, read/write head, actuator arm assembly, and controller board. I/O operations in an HDD are performed by rapidly moving the arm across the rotating flat platters coated with magnetic particles. Data is transferred between the disk controller and the magnetic platters through the read/write (R/W) head, which is attached to the arm. Data can be recorded and erased on magnetic platters any number of times.
Platter: A typical HDD consists of one or more flat circular disks called platters. The data is recorded on these platters in binary codes (0s and 1s). The set of rotating platters is sealed in a case, called Head Disk Assembly (HDA). A platter is a rigid, round disk coated with magnetic material on both surfaces (top and bottom). The data is encoded by polarizing the magnetic area, or domains, of the disk surface. Data can be written to or read from both surfaces of the platter. The number of platters and the storage capacity of each platter determine the total capacity of the drive.
Spindle: A spindle connects all the platters and is connected to a motor. The spindle motor rotates at a constant speed. The disk platters spin at a speed of several thousand revolutions per minute (rpm). Common spindle speeds are 5,400 rpm, 7,200 rpm, 10,000 rpm, and 15,000 rpm. Platter speeds have increased with improvements in technology, although the extent to which they can be improved is limited.
Read/Write Head: Read/Write (R/W) heads read and write data from or to the platters. Drives have two R/W heads per platter, one for each surface. The R/W head changes the magnetic polarization on the surface of the platter when writing data. While reading data, the head detects the magnetic polarization on the surface of the platter. During reads and writes, the R/W head senses the magnetic polarization and never touches the surface of the platter. When the spindle is rotating, a microscopic air gap, known as the head flying height, is maintained between the R/W heads and the platters. This air gap is removed when the spindle stops rotating and the R/W head rests on a special area on the platter near the spindle. This area is called the landing zone. The landing zone is coated with a lubricant to reduce friction between the head and the platter. The logic on the disk drive ensures that heads are moved to the landing zone before they touch the surface. If the drive malfunctions and the R/W head accidentally touches the surface of the platter outside the landing zone, a head crash occurs. In a head crash, the magnetic coating on the platter is scratched and may cause damage to the R/W head. A head crash generally results in data loss.
Actuator Arm Assembly: R/W heads are mounted on the actuator arm assembly, which positions the R/W head at the location on the platter where the data needs to be written or read. The R/W heads for all platters on a drive are attached to one actuator arm assembly and move across the platters simultaneously.
Drive Controller Board: The controller is a printed circuit board, mounted at the bottom of a disk drive. It consists of a microprocessor, internal memory, circuitry, and firmware. The firmware controls the power to the spindle motor and the speed of the motor. It also manages the communication between the drive and the host. In addition, it controls the R/W operations by moving the actuator arm and switching between different R/W heads, and it performs the optimization of data access.
EMC E10-001 Student Resource Guide. Module 2: Data Center Environment

