DBA Interview Questions

What are methods to check for physical or logical corruption of the database?

RMAN VALIDATE, the Data Recovery Advisor (including LIST FAILURE and ADVISE FAILURE), and the DBVERIFY utility (dbv).
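A minimal sketch of those checks (the datafile path and block size are illustrative):

RMAN> VALIDATE DATABASE;                  # physical corruption checks
RMAN> VALIDATE CHECK LOGICAL DATABASE;    # adds logical (intra-block) checks
RMAN> LIST FAILURE;                       # Data Recovery Advisor
RMAN> ADVISE FAILURE;
$ dbv FILE=/u01/oradata/ORCL/users01.dbf BLOCKSIZE=8192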

Where is the password file located and what would you use to create it?

It is located in $ORACLE_HOME/dbs (on Unix/Linux), and you create it with the orapwd utility.
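A minimal sketch (the SID and entry count are illustrative; orapwd prompts for the SYS password if one is not supplied):

$ cd $ORACLE_HOME/dbs
$ orapwd FILE=orapwORCL FORCE=y ENTRIES=5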

What is the Process of enabling Archivelog Mode?

*** Short Answer *** Shut down the database instance, back up the database, perform any operating-system-specific steps (optional), start the instance in mount mode without opening the database, put the database into archivelog mode, open the database, and verify the database is now in archivelog mode. *** Long Answer *** To switch your database to archivelog mode, you must first mount the database, because archivelog mode can only be switched on or off while mounted. Then, using SQL*Plus, you can run the SQL statement alter database archivelog; to turn archivelog mode on, or alter database noarchivelog; to turn it off. You can confirm whether archivelog mode is on or off in SQL*Plus with the statement archive log list; or by querying the v$database performance view. You can also use RMAN to alter the database into archivelog mode; however, you cannot see the change using the command archive log list there, so in that case use SQL*Plus. To recap: shut down the database instance, back up the database, perform any operating-system-specific steps (optional), start the instance in mount mode without opening the database, put the database into archivelog mode, open the database, and verify your database is now in archivelog mode.
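A minimal SQL*Plus sketch of the steps described above:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST;
SQL> SELECT log_mode FROM v$database;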

Sum up everything you do as a DBA into 3 categories?

***Pick 3*** Performance tuning, project implementation, requirements gathering, user administration, and database monitoring and administration.

What is the scan listener in RAC?

***Right Answer*** The SCAN listener sits on top of the whole stack in a RAC environment, so you only need the SCAN hostname, which has access to all the nodes and their local listeners. Once we put the SCAN hostname in the tnsnames.ora of the Oracle client, the SCAN listener decides which node the user will be directed to. So, depending on availability and load balancing, client connections are automatically directed to the appropriate nodes. (*** DON'T say unless you want to, because this is usually not a good idea, but DBAs do it all the time ***) The typical everyday user won't care which node they connect to, but the DBA may want to connect to a specific node depending on the situation. For example, when performing a Data Pump job they need to know which specific node they are on, so they would connect not through the SCAN listener but by using the hostname of the specific node they want to connect to.
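A minimal tnsnames.ora sketch using a SCAN name (the hostname and service name are illustrative):

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myrac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl))
  )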

How is the Standby Database kept in sync with the Primary Database?

***Short Answer*** Redo data is shipped from the Primary database to the Standby database. That data is written to the standby redo logs of the Standby database and then applied from those redo logs to the Standby database. Use the Water Tank analogy. ***Long Answer*** New redo entries from the Primary database are captured by the LNS process (the Log Writer Network Server process) of the Primary database. The LNS process is a background process that periodically checks for new redo entries from the redo log buffer or the log writer (LGWR) process. The LNS process can also capture redo data from the online redo logs if it is not fast enough to capture the redo information before it goes to the online redo log files. That redo data is then shipped through Oracle Net to the RFS process (the Remote File Server process) on the Standby database. The RFS process writes that redo data to the standby redo logs of the Standby database in real time. If the redo data from the Primary gets written to the redo logs, and then to the archived logs, before the LNS process can capture it, the RFS process can get that redo data from those archived redo logs through the Primary's archiver process (ARCn). The redo data that has arrived in the standby redo logs can then go to the archived logs of the Standby database, or a background process called MRP (the Managed Recovery Process) applies the redo data that has arrived in the standby redo logs to the Standby database in real time. From there, our Primary and Standby databases are in sync with one another. Use the Water Tank analogy.
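A quick way to watch the shipping and apply processes at work on the standby (a minimal sketch):

SQL> SELECT process, status, sequence# FROM v$managed_standby;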

Why do you want to work for this company? Why are you looking for a career change?

***looking for growth*** Presently I am looking for an opportunity where I can handle more challenges and grow my skill set. This position with (Company Name) is one I am excited about because of the large client base and company size that could expand my technology and management skills. ***vague short answer*** I am looking for a change of environment and more remote opportunities.

What is RAC (Real Application Cluster)? Give us a brief overview of what RAC is?

**Short Answer** The purpose of RAC is high availability for your data/database. A RAC is multiple instances on different servers accessing the same database for high availability. This gives us the ability to do instance failover and load balancing. **Long Answer showing experience** Oracle Real Application Clusters (RAC) is the configuration intended for high availability, where multiple servers are set up as instances accessing the same database, thereby providing instance failover and load balancing. I have extensive experience maintaining, supporting, and troubleshooting RAC issues. I am currently supporting a 3-node RAC on the Linux OS.

What are 3 things that you are not? The reverse of what are 3 things that you are or what are your strengths?

1) I am not easily intimidated by problems or challenges; I like to work through them and overcome them, which I think is a great pleasure for people who work in the IT business management space. 2) I am not a rigid person; I am very flexible and easy to work with. I'm not someone who has a one-track mind on all things, who can't adapt or take on new ways of doing things. 3) I am not impatient. I like to be detail-oriented and make sure everything is going as planned, and if it isn't, I am able to take a step back and take a new direction if needed. 4) I am not a pessimist; I am a very optimistic person, and I love being able to bring good energy to a work environment. 5) I am not a complainer; I know that everyone around me is working hard, and if I do good work and get my work done, it helps everybody.

What is cartesian product?

A Cartesian product occurs when you have a join between two or more tables and the join condition is missing, incomplete, or wrongly specified. The result is that every record in Table A is joined with every record in Table B. That is not desirable: if you have 1 million records in Table A and 1 million in Table B, you get a 1 million x 1 million result that creates far more records than necessary. DBAs also don't like this because it affects our storage when unnecessary data is generated. A Cartesian product is usually a mistake in the join condition; however, Oracle also provides the CROSS JOIN syntax for creating a Cartesian product intentionally.
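A minimal sketch (the employees and departments tables are illustrative):

SQL> SELECT * FROM employees, departments;              -- join condition missing: Cartesian product
SQL> SELECT * FROM employees CROSS JOIN departments;    -- intentional Cartesian product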

Difference between Cold and Hot backup?

A cold backup, which can also be called an offline backup, is done when there is no user activity going on in the system. It is taken when the database is not running and no users are logged in. All files of the database are copied, and no changes are made during the copy. A hot backup, which can also be called an online or inconsistent backup, is taken while the database is running. All files of the database are copied, and there may be changes to the database during the copy. The benefit of a hot backup is that the database is available for use while the backup is occurring. The benefit of a cold backup is that it is typically easier to administer the backup and recovery process. For cold backups, the database does not need to be in archivelog mode, and thus there is a slight performance gain, as the database is not cutting archive logs to disk. In noarchivelog mode you can only do a cold backup with RMAN; when the database is in archivelog mode, RMAN can do both hot and cold backups.
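A minimal sketch of a cold (consistent) backup with RMAN:

RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> BACKUP DATABASE;
RMAN> ALTER DATABASE OPEN;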

How does a deadlock situation occur in the system? What is a deadlock situation?

A deadlock occurs when two sessions block each other: one query acquires a lock on an object (rows, data pages, extents, tables, etc.) and another transaction tries to access it at the same time while holding a lock the first one needs. Another way to explain it is that two or more transactions are waiting for one another to give up locks. One example: Transaction A might hold a lock on some rows in the Accounts table and need to update some rows in the Orders table to finish, while Transaction B holds locks on those very rows in the Orders table but needs to update the rows in the Accounts table held by Transaction A. Transaction A cannot complete because of the lock on the Orders table, and Transaction B cannot complete because of the lock on the Accounts table. All activity comes to a halt and would remain at a standstill forever, except that Oracle detects the deadlock and aborts one of the transactions. Oracle automatically detects and resolves deadlocks by rolling back the statement associated with the transaction that detects the deadlock (ORA-00060).
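A minimal two-session sketch (the accounts and orders tables are illustrative):

-- Session 1
SQL> UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- Session 2
SQL> UPDATE orders SET status = 'SHIPPED' WHERE id = 9;
-- Session 1 now requests the row Session 2 holds, and waits
SQL> UPDATE orders SET status = 'SHIPPED' WHERE id = 9;
-- Session 2 then requests the row Session 1 holds: deadlock, and one session gets ORA-00060
SQL> UPDATE accounts SET balance = balance - 100 WHERE id = 1;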

What is the difference between a full RMAN Backup and a level 0 RMAN backup?

A full RMAN backup backs up all blocks of an object; for example, all blocks of a datafile can be backed up. A level 0 backup is a full backup that begins the incremental backup strategy; it is the baseline for your incremental backups. The difference between an incremental and a differential backup is that, while an incremental backup only includes the data that has changed since the previous backup, a differential backup contains all of the data that has changed since the last full backup.
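In RMAN terms, a minimal sketch (note that RMAN calls its two level 1 variants differential and cumulative):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;               # the baseline
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;               # differential: changes since the last level 0 or 1
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;    # cumulative: changes since the last level 0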

Name 3 background processes and what they do?

ARCn - the Archiver Process: copies the redo log files to archival storage when they are full or an online redo log switch occurs.
LGWR - the Log Writer Process: writes redo entries to the online redo log. Redo log entries are generated in the redo log buffer of the system global area (SGA), and LGWR writes them sequentially into a redo log file. If the database has a multiplexed redo log, then LGWR writes the redo log entries to a group of redo log files.
PMON - the Process Monitor: monitors the other background processes and performs process recovery when a server or dispatcher process terminates abnormally.
SMON - the System Monitor Process: performs critical tasks such as instance recovery and dead transaction recovery, and maintenance tasks such as temporary space reclamation, data dictionary cleanup, and undo tablespace management.
DBWn - the Database Writer Process: a background process created when you start a database instance; it writes data from the database buffer cache to the data files.

How would you check the size of a diskgroup?

Use ASMCMD, the Oracle Automatic Storage Management command-line utility, and then run the lsdg command.

Difference between delete and truncate?

Both commands remove records from tables; however, with the delete command you have the opportunity to roll back if you have not committed. We also have the opportunity to use a where clause to pinpoint a particular record to delete, limiting the scope of the delete command; but whether it's a single record or all records in the table, we have the ability to roll back if we have not committed. The truncate command completely eliminates all records in the table without the ability to roll back. To elaborate a little more: when you issue the delete command, the records being deleted are immediately written to the undo segment of the undo tablespace for the purpose of rollback. So if we issue a rollback, Oracle will go to the undo segment, fetch those records, and put them back. However, if we commit, there is no longer any ability to roll back. So the delete statement generates undo data for the ability to roll back; the truncate statement does not generate undo data, so there is no ability to roll back.
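A minimal sketch (the employees table is illustrative):

SQL> DELETE FROM employees WHERE employee_id = 100;   -- DML: can be rolled back before COMMIT
SQL> ROLLBACK;
SQL> TRUNCATE TABLE employees;                        -- DDL: cannot be rolled back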

How would you know the sizes of the disk groups in ASM?

By querying v$asm_diskgroup, or by using asmcmd, the ASM command line, and then the command lsdg. You can also use ASMCA, the ASM Configuration Assistant. *** SQL statement to check how much disk space is allocated/in use by an ASM instance ***

SQL> SELECT name, free_mb, total_mb,
            round(free_mb/total_mb*100, 2) "Free%"
     FROM   v$asm_diskgroup;

A new project is starting up and they need someone to help them define their table structure. What would you do to help them with that? It's a brand new design.

First I would meet with the team to discuss the project and what their needs are. We would come up with a model after a couple of meetings, specifically an ER model (an Entity Relationship model), creating a diagram from business/client specifications or narratives. From there I would be able to understand what the primary keys, foreign keys, constraints, and data types will be for the table or tables that need to be created. I would see if there are any tables already in existence that have a similar table structure to the table or tables I want to create, and I would describe those tables or use SQL Developer to export the data definition language (DDL) to see the data types and constraints of the tables. Then, once everything is looking good and has been approved, I would create the table or tables by running DDL statements like the sketch below. I would then continue to work with the team to fine-tune everything as the project moves along. ** Short answer ** Yes, I have done that before. I can create a design of table structures and work with the team to fine-tune the design as the project is moving along.
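A minimal DDL sketch of the kind of table that might come out of such a design (all names and constraints are illustrative, and the referenced customers table is assumed to exist):

SQL> CREATE TABLE orders (
       order_id     NUMBER        CONSTRAINT orders_pk PRIMARY KEY,
       customer_id  NUMBER        NOT NULL
                    CONSTRAINT orders_cust_fk REFERENCES customers (customer_id),
       order_date   DATE          DEFAULT SYSDATE,
       amount       NUMBER(10,2)  CONSTRAINT orders_amt_chk CHECK (amount >= 0)
     );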

Where would I get the SCN?

From querying the v$database Performance View
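A minimal sketch:

SQL> SELECT current_scn FROM v$database;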

What is the command tool for Golden Gate?

GGSCI, Oracle Golden Gate Software Command Interface

What are Huge Pages and how are they used?

HugePages is a feature integrated into the Linux kernel since version 2.6. Enabling HugePages makes it possible for the operating system to support memory pages larger than the default, which is usually 4 KB. As the name suggests, the feature manages huge pages in memory in addition to the standard 4 KB page size. Using huge pages means you need fewer pages; you can define pages as large as 1 GB each. During system boot, you reserve a portion of memory as huge pages for your application. The memory occupied by huge pages is never swapped out; it stays there until you change your configuration. This increases application performance to a great extent, especially for Oracle databases, which have pretty large memory requirements.
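A minimal Linux configuration sketch (the page count is illustrative and must be sized to fit your SGA):

# /etc/sysctl.conf
vm.nr_hugepages = 4096

$ grep Huge /proc/meminfo    # verify after sysctl -p or a reboot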

How would you track a long-running query?

I could query v$session_longops, or I could use an AWR report.
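A minimal sketch against v$session_longops:

SQL> SELECT sid, serial#, opname, sofar, totalwork,
            round(sofar/totalwork*100, 2) pct_done
     FROM   v$session_longops
     WHERE  totalwork > 0
     AND    sofar <> totalwork;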

When companies ask about your experience with MySQL, Postgres, MongoDB, or MS SQL Server databases?

I have minimal to some exposure/experience with (particular database); however, I am a quick learner and I believe I'll be able to get up to speed on it if necessary. I am an Oracle DBA with over x years of experience in Oracle database administration, so I'm primarily an Oracle DBA, but I should be able to get up to speed on it. **MySQL in particular** And I believe MySQL is also owned by Oracle, so it should be no problem. **MS SQL Server in particular** In my company I sometimes run backups for the MS SQL Server DBAs when they are on vacation, or on the weekends when they are not available. So I have some exposure to SQL Server, and I believe I should be able to get up to speed on it if needed, but I'm primarily an Oracle DBA.

Tell me about your PL/SQL experience?

I run PL/SQL scripts to help monitor databases and improve performance, as well as various procedures to monitor performance and the impact on operations.

What are some of the tools you use as a DBA on a daily basis or in general?

I use multiple Oracle tools to help manage and monitor my databases, including:
RMAN - for backups and recovery of database objects
OEM Cloud Control - for monitoring and general administration
Data Pump - for moving Oracle objects from one database to another
SQL*Loader - for loading text files into Oracle tables
SQL*Plus - for command-line access
TOAD/SQL Developer - as an integrated development tool
DBUA - for upgrades
DBCA - for database creation
ASMCA and ASMCMD - for ASM administration
And many more, working with 11g/12c/19c databases, including pluggable databases.

How would you configure a database to use OEM CLOUD CONTROL?

I would install an agent on the host server, or I can use OEM 13c to discover the host and push the agent to the server.

What do you do when shutdown immediate on the database does not happen immediately?

I would open a new terminal session to view the alert log, as well as review OS-level processes currently running that could be the bottleneck. From my experience the shutdown eventually happens, but I will continue to monitor it for reassurance.

You get a call from a user that a report or program that usually runs for 10mins is now taking 1 hour. What are the actions you will take to resolve the issue? Users are complaining that the system is running very slow, what do you do? A user complains that a query that usually runs for 10mins is now taking 1 hour. What are the actions you will take to resolve the issue?

I would ensure that this is not a one-time issue; maybe there are network issues or hardware issues with the server where the data or application is stored. I would check the alert log in case a major database process, like a backup, was taking place at that time. Whether it's a query, a report, or a program, it boils down to a long-running SQL statement, so I would track down the SQL_ID by querying v$session, v$sql, and v$session_longops. I can also use EXPLAIN PLAN to determine the execution path of the SQL statement, looking out for full table scans and other costs in the statement. Perhaps I may need to create an index to improve performance. The SQL Tuning Advisor can also be used to evaluate the statement and make recommendations, and I can look at my AWR report for what the bottlenecks were. All these steps can be done via the command line as well as Oracle Enterprise Manager Cloud Control 13c. ***EXTRA*** How would you know what kind of index to create? It depends on the EXPLAIN PLAN, and I could check my AWR report too.
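A minimal EXPLAIN PLAN sketch (the query is illustrative):

SQL> EXPLAIN PLAN FOR
     SELECT * FROM employees WHERE last_name = 'Smith';
SQL> SELECT * FROM TABLE(dbms_xplan.display);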

What are you looking for more of as a DBA?

I would say Larger projects and the ability to work with multiple teams and expand my skills as a DBA. ***looking for growth*** Presently I am looking for an opportunity where I can handle more challenges and grow my skill set. This position with (Company Name) is one I am excited about because of the large client base and company size that could expand my technology and management skills. ***vague short answer*** I am looking for a change of environment and more remote opportunities.

What happens when the database server experiences a power failure?

Ideally, we should not have a power failure of the database server if it is in a data center, because data centers usually have redundancy built in to avoid power failures. But if it comes to that and the server goes down abruptly, there is really no harm done to the database as such. However, the database has not had the opportunity to write to the datafiles whatever was in its database buffer cache (the job of the DBWn process). Also, whatever was uncommitted has not been given the opportunity to roll back, so we have uncommitted data in the undo segment of our undo tablespace. When the database went down abruptly, that was the equivalent of a shutdown abort, so now we have an inconsistent database. When the power is restored, the SMON background process will immediately do a quick check and discover that there are discrepancies between the last checkpoint registered in our controlfile and the actual contents of our datafiles. So SMON will immediately initiate recovery. Part of that recovery is to roll back everything that was not committed, using the undo segment. SMON also goes to the redo logs and gets every transaction that was supposed to be in the datafiles but that the database did not have the opportunity to write, and applies it to the datafiles, and our database recovers automatically.

Difference between Inner and outer join?

In SQL, when you are extracting data from multiple tables you need to join those tables, and you join them using a condition: for example, field 1 in table A equals field 2 in table B. In an inner join, the result strictly meets the requirements of the join. So if you have an employees table that you join to a departments table on the field department_id, the result will only include employees whose department_id matches a department_id in the departments table. An outer join displays the results that meet the requirements of the join and, depending on whether you do a left, right, or full outer join, also displays records that don't strictly meet the join condition but that need to be displayed. For example, if the employees table is the left-hand table and you also want to display employees that don't have a matching department_id in the right-hand departments table, that is a left outer join. We can also have a right outer join, for departments that have department IDs but no employees. And of course we can have a full outer join that shows the results of both the left and right outer joins. So an outer join shows rows that don't strictly match the join condition, as well as the ones that do.
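A minimal sketch using the employees and departments example from above:

SQL> SELECT e.last_name, d.department_name
     FROM   employees e
     JOIN   departments d ON e.department_id = d.department_id;           -- inner join

SQL> SELECT e.last_name, d.department_name
     FROM   employees e
     LEFT OUTER JOIN departments d ON e.department_id = d.department_id;  -- also keeps employees with no department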

What is a rolling patch?

In a single-instance database, if we apply a patch, our database will be down for however long the patch takes to apply. In a RAC environment, say a 3-node RAC, once you start the process of patching the Oracle instance on node 1, the OPatch utility will notice your other two nodes and ask if you want to apply the patch in a rolling fashion, and you say yes. The patch is applied on node 1 and then moves to the next node until all nodes have been patched, so at least two nodes are always running and the database is never down.

What is something that you are really proud of? Something that you have had great success with?

In my career, I would say that I am really proud of the first time I set up Data Guard in a 3-node RAC. Getting to that point as a DBA was something I was really proud of.

Tell me a time where you failed or made a mistake? AKA tell a story.

In this role as a DBA, it is never routine; different projects and tasks present different challenges, but over time you learn to overcome them. Once, earlier in my career, I mistakenly dropped a table in a UAT (test) environment. It was a testing environment, so the damage was not very severe, but it was not approved. I immediately informed my manager, including my action plan to recover, and quickly recovered the table by flashing it back. ***Explaining more, YOU DON'T HAVE TO SAY*** Oracle Flashback enables you to reverse the effects of dropping (deleting) a table, returning the dropped table to the database along with dependent objects such as indexes and triggers. This feature stores dropped objects in a recycle bin, from which they can be retrieved until the recycle bin is purged, either on purpose or because space is needed. ***DON'T HAVE TO SAY*** I had to learn to adapt and move quickly, and this helped me learn how to work under pressure as a DBA, and more importantly how to not get myself into these situations by being proactive, not reactive. My mistake delayed a job that needed to run and I took some heat for it, but everything worked out. Luckily for me, that was early in my career.
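A minimal sketch of the flashback used (the table name is illustrative):

SQL> FLASHBACK TABLE orders TO BEFORE DROP;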

How can you improve the performance of Data pump?

Increase the parallelism and disable logging if it is not crucial
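A minimal sketch (directory, schema, and file names are illustrative; TRANSFORM=DISABLE_ARCHIVE_LOGGING applies to impdp in 12c and later):

$ expdp system schemas=hr directory=DP_DIR dumpfile=hr_%U.dmp parallel=4 logfile=hr_exp.log
$ impdp system schemas=hr directory=DP_DIR dumpfile=hr_%U.dmp parallel=4 transform=disable_archive_logging:y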

Difference between Obsolete and Expired backup?

It is very important that we take backups in a database environment, because our ability to recover from a disaster, or from the loss of any of the components of our database, depends a lot on the quality of our backup, and RMAN is the most used utility for taking a backup in Oracle. Everyone has to determine the length of time they need to retain their backups for recoverability, and that is usually done through a retention policy in RMAN. Let's say I set a redundancy number for my retention policy, like needing 4 backups in my FRA at all times: the fifth backup, when it is taken, makes the first backup obsolete. Or I decide that a week-old backup should be enough for me to recover in case of disaster; then any backup older than a week becomes obsolete. So an obsolete backup is a backup that falls outside my retention policy. RMAN keeps the metadata of the recovery window in the controlfile, so it can determine which backups are obsolete. By default, RMAN will overwrite obsolete backups when it needs space, but we can clean out those backups using REPORT OBSOLETE, then DELETE OBSOLETE, as part of our backup and restore scripts. An expired backup is a backup that may no longer exist on disk: perhaps a backup-to-tape system has already backed it up and deleted it, or somebody has simply deleted it and it is no longer there, but the controlfile of the database still has an entry saying the backup is there. When I crosscheck the backups, RMAN compares what is in the controlfile against what is on disk in my FRA, and marks missing backups as expired. An expired backup does not exist anymore, but the metadata in the controlfile may say it exists, so we do a crosscheck to make sure it is marked as expired, and then we can go ahead and run the command DELETE EXPIRED BACKUP.
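A minimal RMAN sketch of the commands described above:

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 4;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;
RMAN> CROSSCHECK BACKUP;
RMAN> DELETE EXPIRED BACKUP;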

How would you handle your FRA being full?

It will be reported in my alert log. I would review my backup retention policy and delete obsolete and expired backups, or increase the size of my FRA by altering db_recovery_file_dest_size.
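A minimal sketch (the size is illustrative):

SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 200G;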

Let's say we need to take a hot backup of the database and RMAN is not available, Explain the steps needed to get a backup of the database?

It would be very unusual for RMAN to not be available; RMAN is an integral part of Oracle, so if RMAN weren't available it would be a problem. We can do a schema backup, which is one form of logical backup, or we can use any OS utility to take a backup, like a snapshot using VMware. For a logical backup, we can use a tool like Data Pump. ***Is there anything you need to do at the database level before doing any of those things? AKA what steps at the database level are needed for a schema backup using Data Pump?*** Most OS utilities like snapshots are not visible to the database, so there would be nothing we need to do at the database level. For a schema backup using Data Pump, I would need to make sure I have the right privileges at the database level. If you have SYSDBA rights, or you can log in directly with the system user, then take the export of the database object and it should work. If you don't have SYSDBA privileges, then:
Step 1: Check the privileges of the user you want to grant new privileges to, and grant DBA-type privileges to that user, like CREATE SESSION, ALTER SESSION, CREATE VIEW, and EXECUTE ON "SYS"."DBMS_STATS", to name a few.
Step 2: Create a directory anywhere on the system that is recognized by Oracle, by creating a directory object, and grant read and write privileges on the directory to the user. This step gives the user somewhere to store the export of the object.
Step 3: Grant the DATAPUMP_EXP_FULL_DATABASE role to the user. Apart from granting read and write privileges on the directory, we also need to grant the DATAPUMP_EXP_FULL_DATABASE role to the user who wants to perform the export.
Step 4: Export the schema. The user can use the expdp command to export the database schema data.
There are ways to reach the same result of exporting a schema from a database that can be more secure, but this is just one way.
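A minimal sketch of those steps (all names and paths are illustrative):

SQL> CREATE DIRECTORY dp_dir AS '/u01/exports';
SQL> GRANT READ, WRITE ON DIRECTORY dp_dir TO scott;
SQL> GRANT datapump_exp_full_database TO scott;
$ expdp scott schemas=scott directory=DP_DIR dumpfile=scott.dmp logfile=scott_exp.log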

What is AWR automatic workload repository?

It's a repository where Oracle stores performance statistics as snapshots (hourly by default), and it is the basis for the AWR report.

What is DataGuard? Give a brief description of what DataGuard is?

Let's say a business's data center and all of its data burned down in a fire: how would that business be able to tell its customers and investors that all of the company's data has been lost? Nowadays that can't happen, because we live in a 24/7 business environment. So companies evaluate what kind of technologies they have to ensure continuity of business no matter what happens, also known as a disaster recovery plan. Oracle Data Guard is Oracle's #1 technology for this particular situation. You have a Primary database in Atlanta and a Standby database in New York, and they are kept in sync with one another. If anything happens to the Primary in Atlanta, we can switch over or fail over to the Standby database in New York, and end users won't even know what happened. In order to use Data Guard, you must make sure the server in Atlanta and the server in New York are running the same OS and that both databases are on the same Oracle software; otherwise, you can't do Data Guard. If you needed to acquire data from another type of database, like SQL Server or MongoDB, Oracle GoldenGate would be the tool to use, because it is a heterogeneous platform that can work with multiple different databases.

What is snapshot too old? What is the Oracle error code ORA-01555?

Let's say we made some changes to our database: we updated the salary field on a record from 4000 to 25000. The old value of 4000 is immediately recorded in the undo segment of the undo tablespace. The reason is that if I were to roll back, Oracle knows where to go to get that value and put it back. But once we commit, that data becomes inactive in my undo tablespace and is retained based on my undo retention policy, which by default is 15 minutes; after 15 minutes, if that policy has not been changed, the data is not only expired but Oracle can now overwrite it. Now suppose a long-running query starts at 9 a.m. and runs for 3 hours. Changes made in the database between 9 and 12 o'clock must not show up in its results, so when the query encounters the changes that took place at 10, 11, and 12 o'clock, it avoids picking them up and instead goes to the undo tablespace to pick up the values as of 9 o'clock. If the values are still in the undo tablespace and not overwritten, we are good. But if for any reason the values are no longer there, because maybe the undo tablespace was too small, or my retention policy was too low and those values were overwritten because Oracle was pressed for space, the query fails with the error snapshot too old (ORA-01555). Solutions are to increase the size of the undo tablespace or increase the undo retention. And the good news is that Oracle now has the Undo Advisor, which can advise me on the best settings for my specific database so that I don't get this kind of error.
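A minimal sketch (the retention value is illustrative; undo_retention is specified in seconds):

SQL> SHOW PARAMETER undo_retention
SQL> ALTER SYSTEM SET undo_retention = 3600;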

Where can you find the configuration of RMAN? How do you list the current RMAN configuration?

Login to RMAN and run the command, show all;

Name at least 2 background processes relating to dataguard?

MRP (the Managed Recovery Process) - applies the redo data that has arrived in the standby redo logs to the database in real time, or from the archived logs.
LNS (the Log Writer Network Server process) - a background process that periodically checks for new redo entries from the redo log buffer or log writer (LGWR) process. The LNS process of the primary database captures redo from the redo log buffer and ships it to the RFS process of the standby database through Oracle Net. If the LNS process is not fast enough to capture the redo information before it goes to the online redo log files, or if redo is going to the online redo log files very quickly, then the LNS process reads from the online redo log files and sends the redo to the RFS process through Oracle Net.
RFS (the Remote File Server process) - writes redo data to the standby redo log files of the Standby database.
LSP (the Logical Standby Process) - comes into play for logical standby databases only. It controls the application of archived redo log information to the logical standby database: the LSP process transforms redo data into SQL statements, and these SQL statements are then applied to the logical standby database.

What are some Interview Questions you should/could ask?

Never say you don't have any questions; ask some of these below:
What is the next step in the hiring process?
Can you give me an overview of the DBA team?
What made you decide to work for this company?
What do the day-to-day responsibilities of the role look like?
What does success look like in this position, and how do you measure it?

How would you know that an agent is no longer sending information to OEM?

OEM can be set up to notify me by email when an agent stops sending information to OEM, or a UNIX script can be run on each host to notify me as well.

What are the components of Grid Control(OEM Cloud Control 13c)?

OMS (Oracle Management Service)
Agent
Repository Database
WebLogic Server
Software Library (where all the patches and plug-ins are kept)

Describe the 3 protection modes of DataGuard?

One of the last things we want as DBAs, concerning our Primary and Standby databases, is to be unaware that logs that were supposed to ship have for some reason suddenly stopped, leaving our Primary and Standby out of sync. To ensure that logs from the primary have gotten to the standby database, there are 3 protection modes. Maximum Performance = Oracle assumes the data has gotten to the Standby and continues on with no interruptions in the Primary database; this is typical for test environments. Maximum Availability = Oracle checks whether the redo data has arrived and been applied on the Standby and alerts you if it has not; however, your Primary database continues to function, and meanwhile your standby may be out of sync. Maximum Protection = Oracle will not continue in the primary until it ensures the standby database has applied the redo data from the primary and the two databases are in sync.
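A minimal sketch of changing the mode with the Data Guard broker (assuming a broker configuration already exists):

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;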

What is the default number of redo logs?

Oracle requires at least two redo log groups, each with at least one member. If you are archiving, three groups are suggested.

What is PCTFREE and PCTUSED?

PCTFREE is a parameter that specifies how much space should be left free in a database block for future updates. This means that if PCTFREE = 20, new rows are added to the block until it is 80% full. PCTUSED is a parameter that tells Oracle when a database block is empty enough to be available for reuse/added to the free list. Oracle will only insert new rows in blocks that are enqueued on the free list. This means that if PCTUSED = 40, Oracle will not add new rows to the block unless enough rows are deleted from the block that it falls below 40% used. ***EXTRA*** The PCTINCREASE parameter specifies how much each subsequent extent grows; this value is a percentage.

Describe the 3 types of standby databases, What is the difference between Physical and Logical Standby?

PHYSICAL STANDBY - The standby database is in mount mode; it is just receiving redo data and applying that data to the database.
SNAPSHOT STANDBY - In a physical standby the database is in mount mode, but a snapshot standby database is open for read and write. The redo data keeps coming from the Primary but is not applied to the Standby. When you are ready to convert back to a physical standby, the redo data received from the primary database is applied, and all local updates made to the snapshot standby database are discarded.
LOGICAL STANDBY - In a physical standby, redo data is applied to the Standby from the Primary without needing the underlying SQL statements that generated that data. For a logical standby, once redo data has been applied to a physical standby you can convert it to a logical standby, and all redo data shipped to that logical standby is reverse engineered to show how the data came about on the primary, in the form of SQL statements, which are then applied. This is the least-used type of standby database.

What are the basics of a PL/SQL script?

PL/SQL is a block-structured language, meaning that PL/SQL programs/scripts are divided and written in logical blocks of code. Each block consists of three sub-parts: a Declarative part, an Executable Commands part, and an Exception Handling part. The Declarations section starts with the keyword DECLARE. It is an optional section and defines all variables, cursors, subprograms, and other elements to be used in the program. The Executable Commands section is enclosed between the keywords BEGIN and END and is a mandatory section. It consists of the executable PL/SQL statements of the program and should have at least one executable line of code, which may be just a NULL command to indicate that nothing should be executed. The Exception Handling section starts with the keyword EXCEPTION. This optional section contains exception handlers that deal with errors in the program. Every PL/SQL statement ends with a semicolon (;), and PL/SQL blocks can be nested within other PL/SQL blocks using BEGIN and END. So to summarize: define variables and other elements in the Declarations section, add executable lines of code between the BEGIN and END of the block, and add an EXCEPTION section if necessary. The following is the basic structure of a PL/SQL block:

DECLARE
   <declarations section>
BEGIN
   <executable command(s)>
EXCEPTION
   <exception handling>
END;
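A minimal runnable sketch of that structure:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
       v_greeting VARCHAR2(20) := 'Hello, DBA';
     BEGIN
       DBMS_OUTPUT.PUT_LINE(v_greeting);
     EXCEPTION
       WHEN OTHERS THEN
         DBMS_OUTPUT.PUT_LINE('Error: ' || SQLERRM);
     END;
     /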

What are the 4 different ASM redundancy options and what do they do? - Normal, High, External, Flex

Part of what makes ASM tick is that data on one disk is distributed across multiple disks, so if you lost one disk to corruption, it won't result in a loss of the data itself because the data has been mirrored onto different disks. So Oracle says if you want to do normal redundancy you have to have this certain number of disks in a disk group or failure group, same thing for high redundancy. But if you are going to be using an external technology that is not Oracle, then just use external redundancy. The redundancy of a flex disk group is FLEX REDUNDANCY, where each file group in the flex disk group has its own redundancy. ***EXTRA*** NORMAL REDUNDANCY - Two-way mirroring, requiring two failure groups. HIGH REDUNDANCY - Three-way mirroring, requiring three failure groups. EXTERNAL REDUNDANCY - No mirroring for disks that are already protected using hardware mirroring or RAID. If you have hardware RAID it should be used in preference to ASM redundancy, so this will be the standard option for most installations. FLEX REDUNDANCY - A flex disk group requires a minimum of three failure groups and tolerates failure like NORMAL redundancy. Each file group in the flex disk group has its own redundancy.

Give an example of working with your team? Give an example of how you made an impact at your current workplace?

Prior to joining my current team, daily database tasks were scheduled using UNIX crontab on each physical server, making it difficult to manage and monitor outcomes. I worked with the team to move the job scheduling to a centralized tool, Oracle Enterprise Manager (OEM 13c), where it was easier to manage, monitor, and receive email notifications of outcomes.

Describe the 3 types of network-IPs in RAC?

Public IP: The public IP address is for the server. This is the same as any server IP address, a unique address that exists in /etc/hosts. Private IP: Oracle RAC requires a private IP address to manage the CRS disk group, the Clusterware heartbeat process, and the cache fusion layer. Virtual IP: Oracle uses a Virtual IP (VIP) for database access. The VIP must be on the same subnet as the public IP address. The VIP is used for RAC failover (TAF).

Explain three memory structures? Explain three memory structures found within the SGA and what purpose they serve?

Redo Log Buffer - This section of the SGA holds the changes/redo entries that contain the information necessary to reconstruct, or redo, changes made to the database. These redo entries are written by the log writer to the redo log files on disk every 3 seconds, or when the redo log buffer is 1/3 full.
Database Buffer Cache - This section of the SGA stores copies of data blocks read from datafiles that are currently being used or have recently been used.
Large Pool - This section of the SGA is optional. It provides large memory allocations for the database and helps facilitate backup and recovery operations as well.
SGA (System Global Area) - A group of shared memory structures, known as SGA components, that contain data and control information for one Oracle Database instance. All server and background processes share the SGA. If multiple users are concurrently connected to the same instance, then the data in the instance's SGA is shared among the users; consequently, the SGA is sometimes referred to as the shared global area. Examples of data stored in the SGA include cached data blocks and shared SQL areas.
PGA (Program or Process Global Area) - A nonshared memory region that contains data and control information exclusively for use by an Oracle process. Oracle Database creates the PGA when an Oracle process starts. One PGA exists for each server process and background process; the collection of individual PGAs is the total instance PGA. Database initialization parameters set the size of the instance PGA, not individual PGAs.

What are the options for RMAN retention policy? -- Redundancy and Recovery Window

Redundancy stands for the number of copies that must be available. Once you have the chosen number of backup copies, any older backups are marked as obsolete. This does not put any restriction on how long the backups can be kept. This kind of setting works best for environments where, depending on the SLA (service level agreement), some specified number of backup copies must be kept. By default, this is the setting RMAN works with, keeping only the most current backup copy; depending on the requirement, the number of copies can be changed. Recovery Window is not based on the number of backup copies, but on the number of days for which you want to keep backups. It does not put any condition on the number of backup copies that can be retained; you can have any number of copies within the specified recovery window. Any backup that is older than the recovery window is marked as obsolete.

What is the difference between restore and recovery?

A restore copies files back from a backup; recovery then applies the archived logs and redo logs to bring those files up to date.
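A minimal RMAN sketch:

RMAN> RESTORE DATABASE;    # copy the datafiles back from backup
RMAN> RECOVER DATABASE;    # apply archived/online redo to roll them forward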

What would you do to help improve the skills of Mid and junior-level DBAs? Have you ever led a group of DBAs before?

Sure, I have led a team of 5 DBAs at my company, helping mentor them and explaining in detail the current tasks at hand: for example, what procedures and precautions we take at our company when we are upgrading databases or migrating a certain object from a dev environment to production. I also help them handle and solve whatever issues they escalate to me and answer any of their questions.

Are you familiar with TTS Transportable TableSpace? What is TTS Transportable TableSpace?

TTS is used for migrating or upgrading a database to a new server: the tablespace datafiles are copied to the new server, and only the metadata is exported and imported.

(Same question as #15) How would you know that redo logs are not being applied to the standby database? How would you investigate and resolve it?

The alert log of the primary will indicate it, and I will run a SQL script on V$ARCHIVED_LOG to obtain the sequence# and the applied status. Oracle Enterprise Manager Cloud Control can notify me as well. **How would you investigate and resolve it?** There could be many things to check, especially the network connection, as well as the Log Writer Network Server process (LNS) on the primary and the Managed Recovery Process (MRP) on the standby. Usually, once the network issue is resolved, the archivelog or logs are sent to the standby automatically. However, let's say in this scenario the archivelogs are still missing in the standby database and I have to deploy the fastest solution to get my standby in sync. I would: 1) Get the SCN number of the standby database. 2) Take an incremental backup of the primary from the SCN number obtained from the standby. 3) Cancel the managed recovery process on the standby. 4) Apply the backup to the standby, making sure to catalog it before recovering the standby database. 5) Then resume the managed recovery process on the standby.
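A minimal sketch of the key commands (the SCN and staging path are illustrative):

On the standby:
SQL> SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;
SQL> SELECT current_scn FROM v$database;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
On the primary:
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE;
Back on the standby, after copying the backup pieces over:
RMAN> CATALOG START WITH '/u01/stage/';
RMAN> RECOVER DATABASE NOREDO;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;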

What is the difference between a PDB and a CDB?

The CDB is the root container while the PDB is the pluggable database. All PDBs are plugged into a single CDB root container. The number of PDBs that can be held in a CDB is 253 for Enterprise Edition and 4096 for Exadata Cloud.

Where does RMAN keep its metadata?

The Control File OR The Recovery Catalog

What is the default listener in RAC?

The GRID listener

What utility is used for patching nodes/databases/instances?

The OPatch utility

What is the Dataguard Broker?

The Oracle Data Guard broker logically groups primary and standby databases into a configuration that allows the broker to manage and monitor them together as an integrated unit. You can manage a broker configuration using either Oracle Enterprise Manager Cloud Control (Cloud Control) or the Oracle Data Guard Broker command-line interface - DGMGRL.

Where is an AWR report stored?

The SYSAUX tablespace.

How would you read the content of a trace file?

The TKPROF utility - the TKPROF program converts Oracle trace files into a more readable form. If you have a problem query, you can use TKPROF to get more information.
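A minimal sketch (the trace and output file names are illustrative):

$ tkprof orcl_ora_12345.trc orcl_ora_12345.txt sys=no sort=exeela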

A project manager comes to you with a SQL script/object (tables, data, schemas, backups, etc.) that needs to be moved into production right away, there is a problem that needs to be fixed. How would you handle that?

There are a lot of different tools we can use to migrate things from one environment to the other. But first I would follow the change management process, whatever that may be: project management, agile, or scrum. There have to be appropriate approvals and tickets in place, making sure that all the planning, testing, and preparation is done and approved before moving forward. Then I can go ahead and deploy whatever object or script is needed in production, because as DBAs we know our production database is crucial to the overall running and continuity of the business. Once all of that is done, I would use a tool like Data Pump, or secure copy (scp) in a Linux environment, to move what needs to be moved.

How do you investigate the error ORA-600?

This is a generic internal error that could ultimately result in a support ticket with Oracle. Causes of this message include timeouts, file corruption, failed data checks in memory, hardware, memory, or I/O errors, and incorrectly restored files. I would start by checking my alert logs for any issues, and then use the Oracle Trace File Analyzer to identify the root cause. If the issue still persists, I would create a Service Request ticket with Oracle.

How long would it take you to backup a 30TB or more database (aka a Large Database)?

Well, I've worked with databases ranging from 1TB to 30TB, on premises as well as in the cloud. The answer depends on the company's infrastructure. If the data is mounted on another hard drive away from the server and you are taking a backup across the network, the bandwidth of the network could cause delays; the backup could clog the network and make everything slow, depending on how good the network is. If the server is not up to date and/or the disks are old, the hardware the database sits on could also cause the backup to drag. If we are using the latest SSDs on Exadata with Oracle, then our backups should run efficiently no matter how large the database is. So the answer really depends on the company's infrastructure.

Difference between varchar and char?

Well, char and varchar are data types associated with fields in an Oracle table, and both of them are character data types. However, char is a fixed-length character type, meaning that if you have a field in a table defined as char(5) and you insert the name Abby into that field, Oracle pads in an extra space to make sure all 5 positions in that field are filled. varchar means variable character, so if you insert the name Abby into a varchar field, Oracle does not pad in an extra space; it adapts to the number of characters you typed into that field, handling variable lengths, and therefore saves us space. Ideally we should be using varchar, but char is best suited for fields we know will never change in length: for example, zip codes or state abbreviations like NY for New York and GA for Georgia in the United States; those never change, so we can make them char.
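A minimal sketch (in Oracle the variable-length type is usually written VARCHAR2):

SQL> CREATE TABLE t (c CHAR(5), v VARCHAR2(5));
SQL> INSERT INTO t VALUES ('Abby', 'Abby');
SQL> SELECT LENGTH(c), LENGTH(v) FROM t;   -- returns 5 and 4: the CHAR value was space-padded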

What are the benefits of pluggable databases?

Well, the advantages of pluggable databases can be quite substantial. One advantage is that the pluggable databases in the root container are aware of each other and can use the same resources, whereas in the original architecture each database on a server is not aware of the others and competes for resources on the server, like processing power, hard drive space, and resources for each of their individual background processes (PMON, SMON, LGWR, etc.). Oracle solved this problem with pluggable databases by having one database, the root container, that contains the other databases plugged into it. So your oratab and your server know only one database, and that is the root container. You could have 10 pluggable databases plugged into your root container, but only one set of background processes runs for all of those databases; each database won't have its own set of background processes, so you save substantially on memory and processing power. Another advantage of pluggable databases is that you save a great deal of space as well, because all of your pluggable databases share the same controlfile, archive logs, redo logs, undo tablespaces, flashback logs, OEM, and much more in the container database. Each pluggable database has its own SYSTEM tablespace, SYSAUX tablespace, and datafiles, and you can select whether you want each pluggable database to have its own undo tablespace or use the container's undo tablespace, which is a huge advantage. ***SHORT ANSWER*** The advantages of pluggable databases sum up to: 1) less competition for memory resources from database background processes, and 2) more disk space saved, because pluggable databases share the same controlfile, archive logs, redo logs, undo tablespaces, flashback logs, OEM, etc., in the container database.

What is your typical day at the office as a DBA?

Well this work as a DBA isn't always routine, but I would say, - I attend status meetings - I attend project meetings - I handle any tickets or issues that may come up - And I perform other user tasks and administrative tasks like creating users, and maintaining database objects and storage structures.

Somebody dropped a table, how do you recover from it step by step? How do you recover a dropped table? Let's say the recycling bin has been purged.

Well, this job is not routine; every day presents its own challenges. I remember one of our new hires dropped a critical table by mistake, and the issue was escalated to me. I immediately informed my manager, including my action plan to recover, and my manager notified users of a brief outage period. In Oracle 11g, RMAN is not able to recover a single table from backup (although this is possible from 12c onward), and this particular database was 11g. The table was already purged from the recycle bin, so we couldn't flash back the table. So I restored the database from backup to another server, making sure to include the necessary tablespaces that contain the table. Then, using Data Pump, I imported the table from the auxiliary database into production. There was a brief outage that affected business; however, if we had had Data Guard, that particular database outage would have been minimal.

What is the purpose of ASMLib?

When preparing a disk to be part of an ASM diskgroup, ASMLib provides the Oracle file system definition for the raw ASM disk.

How would you set a trace or trace file?

With every SQL statement, you can run what is called an explain plan to see the execution path/route your query will take to generate your desired result/data (for example: you check the driving directions for a road trip before you even start). This allows you to evaluate any issues or bottlenecks in your queries, as well as see whether the route the SQL statement will take is too costly, which could cause our queries to run longer than need be. With a trace file, we can track our queries as they run in real time; when we look at our trace files, we can see the exact execution plan our queries took as they ran. A trace can be set for many reasons, and as DBAs we know that Oracle generates trace files for our databases constantly. In order to set a trace, first turn the tracing event ON with the proper level to give the appropriate amount of information you want (2, 4, 8, or 12):

SQL> ALTER SESSION SET sql_trace = TRUE;
SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

Give a proper name/number to the trace file:

SQL> ALTER SESSION SET tracefile_identifier = 'test_plan1';

Execute the SQL query that you want to trace, and let the trace file track it for you. Then turn the trace event off once you are done:

SQL> ALTER SESSION SET EVENTS '10046 trace name context off';
SQL> ALTER SESSION SET sql_trace = FALSE;

***EXTRA, YOU DON'T HAVE TO SAY*** This is the SQL for finding the directory where the trace files are generated:

SQL> SHOW PARAMETER user_dump_dest;

Usually, database application developers do not have the required permissions/privileges to do all these steps. However, most of the steps could be done by an application developer; a few steps need a DBA privilege.

How can you monitor a Data pump job?

You can monitor a Data Pump job using Oracle Enterprise Manager Cloud Control (OEM); or you can cat/read the log of the Data Pump job as it is being executed; or you can look at the view DBA_DATAPUMP_JOBS, which identifies all active Data Pump jobs in the database, regardless of their state, on an instance (or on all instances for Real Application Clusters). It also shows all Data Pump master tables not currently associated with an active job.
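A minimal sketch:

SQL> SELECT owner_name, job_name, operation, job_mode, state
     FROM   dba_datapump_jobs;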

How can you monitor SGA (System Global Area) usage?

You can monitor your SGA usage by querying the performance view v$sgastat, for example:

SQL> SELECT round(used.bytes/1024/1024, 2) used_mb,
            round(free.bytes/1024/1024, 2) free_mb,
            round(tot.bytes/1024/1024, 2)  total_mb
     FROM  (SELECT sum(bytes) bytes FROM v$sgastat WHERE name != 'free memory') used,
           (SELECT sum(bytes) bytes FROM v$sgastat WHERE name = 'free memory') free,
           (SELECT sum(bytes) bytes FROM v$sgastat) tot;

How do you stop sending redo logs to the standby?

You kill the Log Writer Network Server (LNS) process on the primary, or you make a temporary change to the tnsnames entry for the standby so the primary is not able to deliver the logs.
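A commonly used, supported alternative (not part of the answer above) is to defer the archive destination that points at the standby; a minimal sketch, assuming log_archive_dest_2 is the standby destination:

SQL> ALTER SYSTEM SET log_archive_dest_state_2 = DEFER;
SQL> ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;   -- resume shipping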

What versions of oracle have you used?

11g,12c,19c

How would you clone a database/explain the process?

1: Create a password file on the destination server
2: Establish connectivity between the target and destination servers (tnsnames.ora, sqlnet.ora)
3: Create the directories for the database files
4: Take the RMAN backup from the target server and copy it to the destination server
5: Start up the destination database in nomount mode
6: Run the RMAN duplicate database command
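A minimal sketch of the final step (the net service names and clone DB name are illustrative):

$ rman TARGET sys@prod AUXILIARY sys@clonedb
RMAN> DUPLICATE TARGET DATABASE TO clonedb NOFILENAMECHECK;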

How would you resolve gaps in log shipment?

A gap is a range of missing archived redo logs, created whenever the standby is unable to receive the next archived redo log generated by the primary database. For example, a gap can occur when the network becomes unavailable and automatic archiving from the primary database to the standby database stops, but the primary continues to generate archived redo logs. When the network is available again, the automatic transmission of redo data from the primary to the now out-of-sync standby resumes; however, the logs created by the primary during the outage are not shipped. So let's say we have a scenario where log 15 has arrived, log 16 has arrived, but logs 17, 18, 19, 20 have not arrived, and suddenly we see that log 21 has arrived: there is a gap. Without logs 17, 18, 19, 20, you won't be able to apply log 21.
The alert log of the primary database will let you know that the standby is no longer receiving logs. I will also have scripts running that monitor log shipment and notify me if there has been any issue, and I have OEM notifications that will alert me as well.
So the question is how do we resolve this? If the logs are still available on the primary, we copy those logs to the standby, catalog them so Oracle is aware of the files in RMAN, and then apply them to the database so that log 21 can be applied.
But let's say we have a huge gap: the standby stopped receiving logs at 16, the next log received was 2000, and now you would have to spend time copying them one by one. Or the logs are no longer available/have been deleted on the primary, and you have no backup of your primary. To deploy the fastest solution to get my standby in sync (see the sketch below):
1: Get the SCN number of the standby database; because the standby is now static, its SCN will not change, while the primary has carried on.
2: Take an incremental backup of the primary from the SCN number obtained from the standby. You can take an incremental backup from the standby's SCN to the most recent SCN in the primary, or from the standby's SCN to the SCN at the time of log 2000, filling the gap and removing the need to back up the entire database.
3: Copy the backup to the standby and catalog it so that RMAN is aware of it.
4: Cancel the Managed Recovery Process (MRP) on the standby.
5: Apply the backup to the standby by restoring/recovering from it.
6: Resume the Managed Recovery Process on the standby.
The gap will be filled, and the standby will apply the next log received until the primary and the standby are back in sync. The missing logs will still not be there, but the database will have the right datafiles to move forward. In Oracle 12c we can do this with one command on the standby database, restoring the data from the primary through the TNS connection.
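A sketch of the incremental-from-SCN roll-forward, with hypothetical SCN and paths:

On the standby:
SQL> SELECT current_scn FROM v$database;

On the primary, using the SCN returned above:
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/u01/backup/stby_%U';

On the standby, after copying the backup pieces over:
RMAN> CATALOG START WITH '/u01/backup/';
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
RMAN> RECOVER DATABASE NOREDO;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

And the 12c one-command network roll-forward, run from RMAN on the standby (prod_tns is a hypothetical TNS alias for the primary):

RMAN> RECOVER DATABASE FROM SERVICE prod_tns NOREDO;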

What is the difference between a view and a materialized view?

A view is a database object created by a SELECT statement; it does not hold data itself but acts as a window to data that resides in the tables mentioned in the SELECT statement. A materialized view, on the other hand, actually holds the data and can be scheduled to refresh it periodically. This is especially effective for queries that need to access data as it stood at a particular time, before changes were made to it throughout the day.
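A minimal illustration (table and object names hypothetical):

SQL> CREATE VIEW emp_sal_v AS SELECT employee_id, salary FROM employees;
SQL> CREATE MATERIALIZED VIEW emp_sal_mv
     REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1
     AS SELECT employee_id, salary FROM employees;

The view re-runs its query every time it is selected from; the materialized view stores the rows and, as written here, refreshes them once a day.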

Explain the difference between Automatic Memory Management (AMM) and Automatic Shared Memory Management (ASMM)

AMM - Automatic Memory Management allows for dynamic management of memory in Oracle. Based on current processing, Oracle automatically allocates, from your target memory size, the memory required for the operation, as opposed to the DBA doing it manually. ASMM - Automatic Shared Memory Management, introduced in 10g, is the predecessor of AMM. ASMM gives the DBA more direct control over the sizes of the System Global Area (SGA) and the instance Program Global Area (PGA); you can disable Automatic Memory Management and enable Automatic Shared Memory Management. With Automatic Memory Management, you specify a target size for the database instance memory as a whole. The instance automatically tunes to the target memory size, redistributing memory as needed between the SGA and the instance PGA. This method is the simplest and is strongly recommended by Oracle. With Automatic Shared Memory Management, the management mode is partially automated: you set a target size for the SGA and the database dynamically tunes the sizes of the SGA components, and you then have the option to set an aggregate target for the PGA or manage the PGA work areas individually. With manual memory management, instead of setting the total memory size, you set many initialization parameters to manage the components of the SGA and instance PGA individually. A parameter sketch follows below.
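In parameter terms, a minimal sketch (sizes hypothetical; these take effect at the next restart):

-- AMM: one target for SGA + PGA combined
SQL> ALTER SYSTEM SET memory_target = 4G SCOPE=SPFILE;

-- ASMM: memory_target off, separate targets for SGA and PGA
SQL> ALTER SYSTEM SET memory_target = 0 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET sga_target = 3G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET pga_aggregate_target = 1G SCOPE=SPFILE;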

What is an Oracle instance?

An instance is the set of memory structures (the SGA) and background processes that manage the database; the software itself lives in the ORACLE_HOME. The database, the physical files on disk, then attaches to (is mounted and opened by) the instance. A single instance opens at most one database at a time, though from 12c onward a container database under one instance can host multiple pluggable databases, and in RAC several instances on different nodes open the same database.

What is an index and why is it used?

An index helps facilitate and improve the performance of SQL statements running on a database. When a table has no index, every query on that table performs a full table scan; if there are one billion records on that table, our database has to go through all one billion records to find what we are looking for. But let's say you place an index on the employees table using the field called employee_id, for example: anytime we are looking for a particular employee, Oracle can use the index to find exactly where that employee's record is on the table without needing a full table scan. So an index is a schema object that contains an entry for each value that appears in the indexed column(s) of a table and provides direct, fast access to rows. When you make a field a primary key or unique key, Oracle automatically creates an index, because Oracle assumes you will use that field or fields to search for records on that particular table. Using an index to improve performance is not always the solution, but it is one way; the SQL Tuning Advisor can advise you on whether you need to create an index.
***Example*** Let's say you are going to see your friend/kid John at a college campus. You go to the dorms and ask, where is John? There could be hundreds of kids named John at that school, and if you have no other information, you have to spend hours going room to room looking for your John, doing a full campus scan. But if you have his student ID, then an index of student ID entries can immediately show you all the information for that particular student and take you straight to his dorm room number.
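For example, matching the employees scenario above (table and column hypothetical):

SQL> CREATE INDEX emp_id_idx ON employees (employee_id);

After this, a query such as SELECT * FROM employees WHERE employee_id = 1001 can use an index scan instead of a full table scan.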

What is Fast-Start Failover?

Fast-Start Failover is a feature that allows the Oracle Data Guard broker to automatically fail over a failed primary database to a predetermined standby database. This feature increases the availability of the database by eliminating the need for a DBA's involvement in the failover process.
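A minimal broker sketch, assuming a working Data Guard broker configuration already exists (threshold value hypothetical):

DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> START OBSERVER

An observer process must be running to detect the loss of the primary and initiate the automatic failover.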

How do you create a Listener?

By using netca (the Network Configuration Assistant) or netmgr (Oracle Net Manager).

How would you know what patch was applied recently?

By using the OPatch utility: opatch lsinventory lists the patches applied to the ORACLE_HOME (opatch lspatches gives a shorter summary).

How can you check the integrity of your Backups?

By using RMAN VALIDATE
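For example:

RMAN> RESTORE DATABASE VALIDATE;
RMAN> VALIDATE BACKUPSET 5;

The first reads the backups RMAN would use for a restore and checks them for corruption without restoring anything; the second checks a specific backup set (the number 5 is hypothetical; get it from LIST BACKUP).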

How would you start/stop the entire cluster including the database?

By using the crsctl (Cluster Control Utility) command as the root user.
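For example, as root (Grid Infrastructure 11gR2 and later):

# crsctl stop cluster -all
# crsctl start cluster -all

Stopping the cluster stack also brings down the registered resources, including the database instances; crsctl stop crs would additionally stop the high-availability services, but on the local node only.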

How would you check the status of all cluster components in RAC?

By using the crsctl (Cluster Control Utility) command: crsctl status resource -t

How can you start a database in a RAC environment?

By using the srvctl (Server Control Utility) command: srvctl start database -d <name of database>

How do you check the status of an OEM agent?

By using the command emctl status agent

How do you check the status of the OMS in OEM?

By using the command emctl status oms

How would you run an AWR report without OEM?

From the command line, by running the seeded script awrrpt.sql found in $ORACLE_HOME/rdbms/admin.
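In SQL*Plus the ? shorthand expands to $ORACLE_HOME, so you can run it as:

SQL> @?/rdbms/admin/awrrpt.sql

The script prompts for the report format (HTML or text), the number of days of snapshots to list, the begin and end snapshot IDs, and the report file name.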

What will you use for upgrading a database?

DBUA (Database Upgrade Assistant)

What is the difference between DB time and Elapsed time?

Elapsed Time - When looking at an AWR report, the Elapsed Time is the wall-clock duration for which the report was generated. For example, if we generate an AWR report for 1 hour, the Elapsed Time in the report will be 60 minutes. DB Time - DB Time is a statistic that represents the CPU time consumed by all Oracle foreground sessions over that period, plus their non-idle wait time. It tells us how much activity the database performed over the duration; a DB Time much higher than the Elapsed Time indicates that many sessions were working (or waiting) concurrently.

What is Exadata?

Exadata is Oracle's customized hardware (servers, network, and other components) and software, optimally tuned to work together for Oracle-related technologies and many other vendor applications. The Oracle Cloud Infrastructure Gen 2 runs completely on Exadata servers.

What can you use to tune SQL queries?

Explain Plan, SQL Tuning Advisor, AWR reports
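For example, a quick explain plan from SQL*Plus (table and predicate hypothetical):

SQL> EXPLAIN PLAN FOR SELECT * FROM employees WHERE employee_id = 1001;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

This shows the optimizer's intended execution path and cost without actually running the statement.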

