Question 1. Which three stages support the dynamic (runtime) definition of the physical column metadata? (Choose three.)
A. the Sequential stage
B. the Column Export stage
C. the CFF stage
D. the DRS stage
E. the Column Import stage
Answer: A, B, E

Question 2. A Varchar(10) field named Source Column is mapped to a Char(25) field named Target Column in a Transformer stage. The APT_STRING_PADCHAR environment variable is set in Administrator to its default value. Which technique describes how to write the derivation so that values in Source Column are padded with spaces in Target Column?
A. Include APT_STRING_PADCHAR in your job as a job parameter. Specify the C/C++ end-of-string character (0x0) as its value.
B. Map Source Column to Target Column. The Transformer stage will automatically pad with spaces.
C. Include APT_STRING_PADCHAR in your job as a job parameter. Specify a space as its value.
D. Concatenate a string of 25 spaces to Source Column in the derivation for Target Column.
Answer: C

Question 3. Which three privileges must the user possess when running a parallel job? (Choose three.)
A. read access to APT_ORCHHOME
B. execute permissions on local copies of programs and scripts
C. read/write permissions to the UNIX /etc directory
D. read/write permissions to APT_ORCHHOME
E. read/write access to disk and scratch disk resources
Answer: A, B, E

Question 4. In a Teradata environment, which stage invokes Teradata-supplied utilities?
A. Teradata API
B. DRS Teradata
C. Teradata Enterprise
D. Teradata MultiLoad
Answer: D

Question 5. When importing a COBOL file definition, which two are required? (Choose two.)
A. The file you are importing is accessible from your client workstation.
B. The file you are importing contains level 01 items.
C. The column definitions are in a COBOL copybook file and not, for example, in a COBOL source file.
D. The file does not contain any OCCURS DEPENDING ON clauses.
Answer: A, B

Question 6. Which two tasks will create DataStage projects? (Choose two.)
A. Export and import a DataStage project from DataStage Manager.
B. Add new projects from DataStage Administrator.
C. Install the DataStage engine.
D. Copy a project in DataStage Administrator.
Answer: B, C

Question 7. Which three defaults are set in DataStage Administrator? (Choose three.)
A. default prompting options, such as Autosave job before compile
B. default SMTP mail server name
C. project-level default for Runtime Column Propagation
D. project-level defaults for environment variables
E. project-level default for auto-purge of job log entries
Answer: C, D, E

Question 8. Which three are keyless partitioning methods? (Choose three.)
A. Entire
B. Modulus
C. Round Robin
D. Random
E. Hash
Answer: A, C, D

Question 9. Which two must be specified to manage Runtime Column Propagation? (Choose two.)
A. enabled in DataStage Administrator
B. attached to a table definition in DataStage Manager
C. enabled at the stage level
D. enabled with environment parameters set at runtime
Answer: A, C

Question 10. Which three are valid ways within a Job Sequence to pass parameters to Activity stages? (Choose three.)
A. Exec Command Activity stage
B. User Variables Activity stage
C. Sequencer Activity stage
D. Routine Activity stage
E. Nested Condition Activity stage
Answer: A, B, D

Question 11. A client requires that a database table be loaded using two jobs. The first job writes to a dataset. The second job reads the dataset and loads the table. The two jobs are connected in a Job Sequence. What are three benefits of this approach? (Choose three.)
A. The time it takes to load the table is reduced.
B. The database table can be reloaded after a failure without re-reading the source data.
C. The dataset can be used by other jobs even if the database load fails.
D. The dataset can be read if the database is not available.
E. The data in the dataset can be archived and shared with other external applications.
Answer: B, C, D

Question 12. You are reading customer data using a Sequential File stage and transforming it using the Transformer stage. The Transformer is used to cleanse the data by trimming spaces from character fields in the input. The cleansed data is to be written to a target DB2 table. Which partitioning method would yield optimal performance without violating the business requirements?
A. Hash on the customer ID field
B. Round Robin
C. Random
D. Entire
Answer: B

Question 13. Which three are valid trigger expressions in a stage in a Job Sequence? (Choose three.)
A. Equality (Conditional)
B. Unconditional
C. Return Value (Conditional)
D. Difference (Conditional)
E. Custom (Conditional)
Answer: B, C, E

Question 14. An Aggregator stage using a Hash technique processes a very large number of rows during month-end processing. The job occasionally aborts during these large runs with an obscure memory error. When the job is rerun, processing the data in smaller amounts corrects the problem. Which change would correct the problem?
A. Set the Combinability option on the Stage Advanced tab to Combinable, allowing the Aggregator to use the memory associated with other operators.
B. Change the partitioning keys to produce more data partitions.
C. Add a Sort stage prior to the Aggregator and change to a sort technique on the Stage Properties tab of the Aggregator stage.
D. Set the environment variable APT_AGG_MAXMEMORY to a larger value.
Answer: C

Question 15. Which three actions are performed using stage variables in a parallel Transformer stage? (Choose three.)
A. A function can be executed once per record.
B. A function can be executed once per run.
C. Identify the first row of an input group.
D. Identify the last row of an input group.
E. Look up a value from a reference dataset.
Answer: A, B, C

Question 16. The source stream contains customer records. Each record is identified by a CUSTID field. It is known that the stream contains duplicate records, that is, multiple records with the same CUSTID value. The business requirement is to add a field named NUMDUPS to each record that contains the number of duplicates, and write the results to a target DB2 table. Which job design would accomplish this?
A. Send the incoming records to a Transformer stage. Use a Hash partitioning method with CUSTID as the key and sort by CUSTID. Use stage variables to keep a running count of the number of each new CUSTID. Add this count to a new output field named NUMDUPS, then load the results into the DB2 table.
B. Use a Modify stage to add the NUMDUPS field to the input stream, then process the data via an Aggregator stage, using the Group and Count Rows options on CUSTID, with the result of the sum operation sent to the NUMDUPS column in the Mapping tab for load into the DB2 table.
C. Use a Copy stage to split the incoming records into two streams. One stream goes to an Aggregator stage that groups the records by CUSTID, counts the number of records in each group, and outputs the results to the NUMDUPS field. The output from the Aggregator stage is then joined to the other stream using a Join stage on CUSTID, and the results are then loaded into the DB2 table.
D. Use an Aggregator stage to group the incoming records by CUSTID and to count the number of records in each group, then load the results into the DB2 table.
Answer: C

Question 17. A job contains a Sort stage that sorts a large volume of data across a cluster of servers. The customer has requested that this sorting be done on a subset of servers identified in the configuration file to minimize impact on database nodes. Which two steps will accomplish this? (Choose two.)
A. Create a sort scratch disk pool with a subset of nodes in the parallel configuration file.
B. Set the execution mode of the Sort stage to sequential.
C. Specify the appropriate node constraint within the Sort stage.
D. Define a non-default node pool with a subset of nodes in the parallel configuration file.
Answer: C, D

Question 18. You have a compiled job and a parallel configuration file. Which three methods can be used to determine the number of nodes actually used to run the job in parallel? (Choose three.)
A. within DataStage Designer, generate report and retain intermediate XML
B. within DataStage Designer, show performance statistics
C. within DataStage Director, examine log entry for parallel configuration file
D. within DataStage Director, examine log entry for parallel job score
E. within DataStage Director, open a new DataStage Job Monitor
Answer: C, D, E

Question 19. Which three features of datasets make them suitable for job restart points? (Choose three.)
A. They are indexed for fast data access.
B. They are partitioned.
C. They use data types that are in the parallel engine internal format.
D. They are persistent.
E. They are compressed to minimize storage space.
Answer: B, C, D

Question 20. The last two steps of a job are an Aggregator stage using the Hash method and a Sequential File stage with a Collector type of Auto that creates a comma-delimited output file for use by a common spreadsheet program. The job runs a long time because data volumes have increased. Which two changes would improve performance? (Choose two.)
A. Change the Aggregator stage to a Transformer stage and use stage variables to accumulate the aggregations.
B. Change the Sequential stage to a Data Set stage to allow the write to occur in parallel.
C. Change the Aggregator stage to use the sort method. Hash and sort on the aggregation keys.
D. Change the Sequential stage to use a Sort Merge collector on the aggregation keys.
Answer: C, D
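The fork-join design chosen in Question 16 (Copy into two streams, Aggregator counts by CUSTID, Join reattaches the count) can be sketched outside DataStage. The following is an illustrative Python sketch of that logic, not DataStage code; the function name is invented, and only the CUSTID/NUMDUPS field names come from the question:

```python
from collections import Counter

def add_numdups(records):
    # "Aggregator" leg: group by CUSTID and count the rows in each group.
    counts = Counter(r["CUSTID"] for r in records)
    # "Join" leg: attach each group's count to every original record as NUMDUPS.
    return [dict(r, NUMDUPS=counts[r["CUSTID"]]) for r in records]

rows = [{"CUSTID": "C1"}, {"CUSTID": "C1"}, {"CUSTID": "C2"}]
result = add_numdups(rows)
# Every record survives, each carrying its duplicate count:
# [{'CUSTID': 'C1', 'NUMDUPS': 2}, {'CUSTID': 'C1', 'NUMDUPS': 2},
#  {'CUSTID': 'C2', 'NUMDUPS': 1}]
```

The sketch also shows why option D falls short: an Aggregator alone collapses each group to one summary row, whereas the requirement is to keep every record and annotate it with its group's count.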
Question 1. Acme Computer Corp has implemented SAN Copy to fully copy 60 LUNs from a DMX to a CX700 with the default SAN Copy settings. The copy process is taking longer than expected. What are two [2] settings that would improve performance?
A. Increase the number of concurrent sessions
B. Increase write cache
C. Increase the throttle value
D. Increase the Reserved LUN Pool
Answer: A, C

Question 2. Click the Exhibit button. BRM Company was using the MirrorView configuration shown in the graphic. The company decided that it did not need two sites to mirror to. BRM relocated the CLARiiON at Site B to a location without connectivity to Site A. Now BRM wants to destroy the mirrors on this CLARiiON. What should be done?
A. Expand Remote Mirrors in Navisphere Manager, expand the mirror, right-click on the Secondary Image, and select Destroy
B. This cannot be done without connectivity to the other MirrorView-connected system
C. Expand Remote Mirrors in Navisphere Manager, go into Engineering Mode, right-click on the Mirror, and select Force Destroy
D. Expand Remote Mirrors in Navisphere Manager, right-click on the Mirror, and select Destroy
Answer: A

Question 3. FM Corp has a CX500 in Boston. It is mirrored with MirrorView over IP to a CX500 at a remote site, 20 miles away in Newton. FM Corp would like to add an NS700G to both existing CX500s. Which statement is correct?
A. An NS700G can be attached to both the Boston and Newton sites
B. An NS700G can be attached to the Boston CX500 but not the Newton site
C. An NS700G is not supported in this configuration
D. An NS700G can be attached to the Newton site but not the Boston site
Answer: C

Question 4. JR Ltd has two networks. The CLARiiONs are connected to a 128.2.1 network. All hosts are connected to JR's corporate network of 128.1.1. Both networks are Class C, routable, and have firewall protection. All hosts in Navisphere are unmanaged. How can a reporting error in the unmanaged hosts be fixed in Navisphere?
A. Open TCP port 6389 on the firewall(s)
B. Open TCP port 6390 on the firewall(s)
C. Manually register the unmanaged hosts
D. Open TCP port 443 on the firewall(s)
Answer: A

Question 5. The DLC Company is planning to install a CX700 with SnapView. DLC has purchased 25 × 73 GB 10k rpm disks. DLC plans to have a 150 GB data LUN on each of five 4+1 RAID 5 groups. Your task is to provision the Reserved LUNs for SnapView Snapshots, which will be used for backups. How do you configure the LUNs?
A. Bind all the Reserved LUNs on one RAID Group. SnapView will assign them to the Source LUNs
B. Bind all the Reserved LUNs on one RAID Group, and assign them to the Source LUNs
C. Bind the Reserved LUNs across RAID Groups. SnapView will assign them to the Source LUNs
D. Bind the Reserved LUNs across RAID Groups, and assign them to the Source LUNs
Answer: C

Question 6. What is ALWAYS the best thing to do to optimize Reserved LUN Pool disk performance?
A. Use both FC and ATA disks
B. Use MetaLUNs for the Reserved LUN Pool
C. Only use dedicated FC disks
D. Use equal-sized MetaLUNs
Answer: C

Question 7. FDD Coffee Company has an environment consisting exclusively of RAID 5 LUNs. FDD is interested in a local backup solution. The replica used for backup should be available to recover the Source LUN in the event of corruption. The company also wishes to use local replicas for testing. Testing has a read/write ratio of 95/5. Three replicas of each Source LUN will be made and presented to three test hosts. The testing should have as little impact as possible on the production environment. The company also wishes to minimize the space requirements for the new solution. What is the best solution?
A. Use a Clone of each Source LUN for backup. Use 3 Snapshots of each Clone for testing
B. Use 3 Clones of each Source LUN for testing, and a Snapshot of each Source LUN for backups
C. Use 3 Snapshots of each Source LUN for testing, and an additional Snapshot of each Source LUN for backups
D. Use 3 Clones of each Source LUN for testing, and an additional Clone of each Source LUN for backups
Answer: A

Question 8. Your customer, HLS, Inc., has written a script for its SnapView implementation. In the testing process, the customer finds that the backup host will not consistently see the Snapshot presented to it. The script includes the following lines:

navicli -h SPA snapview -startsession sess1 -snapshotname snap1
navicli -h SPA snapview -activatesnapshot sess1 -snapshotname snap1

What would you recommend be done?
A. Activate the Snapshot with admsnap on the production host
B. Start the Session with admsnap on the production host
C. Activate the Snapshot with admsnap on the backup host
D. Start the Session with admsnap on the backup host
Answer: C

Question 9. LGC has been using MS SQL for its primary production database application. In the past, LGC has not experienced any major performance problems. Recently, the company added MirrorView/A to its CLARiiON CX500 arrays for two reasons: disaster recovery, and business continuance protection of its SQL environment. Shortly thereafter, LGC's SQL database started to experience regular cycles of slower response. What is the most likely cause of this problem?
A. MirrorView/A delta set impact on SQL LUNs
B. Intermittent network response problems
C. Insufficient Reserved LUN Pool capacity
D. SQL report queries are poorly designed
Answer: A

Question 10. Which four [4] Clone-related operations must wait until the clone reverse synchronization is complete?
A. Add another Clone to the Clone Group
B. Create a new Clone Group on the array
C. Reverse synchronize any Clone in the Clone Group
D. Delete a different Clone Group on the array
E. Remove the Clone that is reverse synchronizing
F. Synchronize any Clone in the Clone Group
Answer: A, C, E, F
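The rule set tested in Question 10 can be pictured as a simple state machine. The following is a toy Python illustration, not the Navisphere or navicli API (all class and method names here are invented): while one clone's reverse synchronization is in progress, operations scoped to that Clone Group are blocked, whereas creating or deleting other Clone Groups on the array is not gated.

```python
class CloneGroup:
    """Toy model of which operations are gated by an in-progress reverse sync."""

    def __init__(self):
        self.reverse_sync_active = False

    def _require_idle(self, op):
        # Any group-scoped operation must wait while a reverse sync runs.
        if self.reverse_sync_active:
            raise RuntimeError(f"'{op}' must wait for reverse sync to complete")

    # Gated operations (options A, C, E, F in Question 10):
    def add_clone(self):
        self._require_idle("add clone")

    def remove_reverse_syncing_clone(self):
        self._require_idle("remove clone")

    def synchronize(self):
        self._require_idle("synchronize")

    def reverse_synchronize(self):
        self._require_idle("reverse synchronize")
        self.reverse_sync_active = True


# Array-level work on *other* Clone Groups (options B and D) is not gated by
# this group's reverse sync, so in this model it is independent of any group.
def create_clone_group():
    return CloneGroup()
```

Running the model: after `reverse_synchronize()` is started on a group, calls to `add_clone()`, `synchronize()`, or `remove_reverse_syncing_clone()` on that group raise, while `create_clone_group()` still succeeds.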
Copyright © 2004 CertsBraindumps.com Inc. All rights reserved.