Sybase Interview Questions - Major


What would you do when the ASE server's performance is bad?

"Bad performance" is not a very meaningful term, so you'll need to get a more objective diagnosis first. Find out (i) what such a complaint is based on (clearly increasing response time or just a "feeling" that it's slower?). (ii) for which applications / queries / users this seems to be happening, and (iii) whether it happens continuously or just incidentally. Without identifying the specific, reproducible problem, any action is no better than speculation.

What is deadlock ?

A deadlock occurs when two or more user processes each hold a lock on a separate page or table and each wants to acquire a lock on the other process's page or table. The transaction with the least accumulated CPU time is killed and all of its work is rolled back.

What is a difference between a segment and a device?

A device is, well, a device: storage media that holds images of logical pages. A device will have a row in the sysdevices table. A fragment is a part of a device, indicating a range of virtual page numbers that have been assigned to hold the images of a range of logical page numbers belonging to one particular database. A fragment is represented by a row in sysusages. A segment is a label that can be attached to fragments. Objects can be associated with a particular segment (technically, each indid in sysindexes can be associated with a different segment). When future space is needed for the object, it will only be allocated from the free space on fragments that are labeled with that segment. There can be up to 32 segments in a database, and each fragment can be associated with any, all, or none of them (warnings are raised if there are no segments associated). Sysusages has a column called segmap, a bitmap indicating which segments are associated with the fragment; this maps to the syssegments table.
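
As an illustration, here is a minimal sketch (the database mydb, device datadev2, segment order_seg and table orders are hypothetical names) of creating a segment, placing an object on it, and inspecting the mapping:

use mydb
go
-- label a fragment of device datadev2 (already part of mydb) with a new segment
exec sp_addsegment 'order_seg', 'mydb', 'datadev2'
go
-- future space for the orders table will be allocated only on that segment
exec sp_placeobject 'order_seg', 'orders'
go
-- segmap is the bitmap of segment labels attached to each fragment
select lstart, size, segmap
from master..sysusages
where dbid = db_id('mydb')
go
exec sp_helpsegment 'order_seg'
go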

What is a natural key?

A natural key is a key built from columns that occur naturally in the data and uniquely identify the row, as opposed to a surrogate key (such as an identity column) added purely to act as an identifier.

Which is usually faster — a nonclustered index that covers the query or a clustered index?

A nonclustered index or a DOL clustered index that covers the query is usually faster than an APL clustered index.

What are the three ways that table data can be accessed?

A table scan, a nonclustered index scan, or a clustered index scan.

With respect to data placement, how do APL clustered indexes function differently from DOL clustered indexes?

APL clustered indexes maintain the table in index key order: as new rows are inserted, they must be placed in their correct index position. DOL clustered indexes are placement indexes: if a row fits in its appropriate location it is inserted in index key order; otherwise it is inserted into the next page with enough available space to hold the row. This can cause the DOL table to end up out of index key order.

What is optdiag used for?

optdiag is an ASE command-line utility used to read, write, and simulate optimizer statistics.

What is the point in adding statistics on non-indexed columns - after all, they can't make the optimizer use an index if none exists?

Adding statistics on non-indexed columns can help the optimizer cost joins more accurately. Without statistics the optimizer makes assumptions as to the number of qualifying rows.

When is an adjusted query plan created and how does it differ from the optimal query plan?

An adjusted query plan is created when the number of worker processes specified in the optimal query plan is not available at runtime. The adjusted query plan differs from the optimal query plan in that it contains a smaller number of worker processes and it may use serial access instead of parallel access for some or all of the tables in the query.

What's the difference between a primary key and a unique key?

Both primary key and unique constraints enforce uniqueness of the column(s) on which they are defined. However, by default a primary key creates a clustered index on the column, whereas unique creates a nonclustered index. Another major difference is that a primary key doesn't allow NULLs, while a unique key allows one NULL only.

How do you optimize a stored procedure?

By creating appropriate indexes on the tables it uses, and by writing the queries so that they can use those indexes, then checking the query plan to verify that the optimizer picks the appropriate index.

How do you force a transaction to fail?

By killing a process you can force a transaction to fail.

What is candidate key, alternate key & composite key?

Candidate key: a column or set of columns that could serve as the primary key (a primary key or unique constraint column); a table can have multiple candidate keys. Alternate key: a candidate key that was not chosen as the primary key. Composite key: an index key that includes two or more columns; for example authors(au_lname, au_fname).

What is sp_recompile?

Causes each stored procedure and trigger that uses the named table to be recompiled the next time it runs. Usage: the queries used by stored procedures and triggers are optimized only once, when they are compiled. As you add indexes or make other changes to your database that affect its statistics, your compiled stored procedures and triggers may lose efficiency. By recompiling the stored procedures and triggers that act on a table, you can re-optimize the queries for maximum efficiency.
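
For example (a minimal sketch assuming a table called titles, as in the pubs2 sample database), after adding a new index you could flag every dependent procedure and trigger for recompilation:

-- mark all procedures and triggers that reference titles for recompilation
exec sp_recompile titles
go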

Setting shared memory

Check and adjust the operating system shared memory parameter.
• To check the current shared memory size, enter:
# /sbin/sysctl kernel.shmmax
• To adjust the shared memory size, enter the following, where nnn is the new size in bytes (at least 64MB, which is 67108864 bytes):
# /sbin/sysctl -w kernel.shmmax=nnn
To guarantee that this value is applied every time the system is started, add the sysctl command above to your /etc/rc.d/rc.local file (/etc/init.d/boot.local on SuSE systems).

What are constraints? Explain different types of constraints?

Constraints enable the RDBMS to enforce the integrity of the database automatically, without requiring you to create triggers, rules, or defaults. Types of constraints: NOT NULL, CHECK, UNIQUE, PRIMARY KEY, FOREIGN KEY.
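
A minimal sketch showing all five constraint types on a hypothetical table (a customers table with a cust_id primary key is assumed to exist for the foreign key):

create table orders
(
    order_id   int         not null primary key,                   -- PRIMARY KEY (and NOT NULL)
    order_code varchar(20) null     unique,                        -- UNIQUE
    qty        int         not null check (qty > 0),               -- CHECK
    cust_id    int         not null references customers (cust_id) -- FOREIGN KEY
)
go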

How can you avoid last page contention on an APL heap table when many inserts and/or updates are made simultaneously?

Create a clustered index on the column that provides the most "randomness."

How do you BCP only a certain set of rows out of a large table?

If you're in ASE 11.5 or later, create a view for those rows and BCP out from the view. In earlier ASE versions, you'll have to select those rows into a separate table first and BCP out from that table. In both cases, the speed of copying the data depends on whether there is a suitable index for retrieving the rows.
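
A minimal sketch (the database mydb, the table orders, and the region filter are hypothetical):

-- inside the database: create a view that selects only the rows to be copied out
create view v_orders_west
as
select * from orders
where region = 'WEST'
go

Then, at the operating system prompt (server name, login and file name are placeholders):

bcp mydb..v_orders_west out orders_west.dat -c -Usa -SMYSERVER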

Index in Sybase

In Sybase, "There are clustered and nonclustered indexes. A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore table can have only one clustered index. The leaf nodes of a clustered index contain the data pages. A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows."

What are the most important DBA tasks?

In my opinion, these are (in order of importance): (i) ensure a proper database / log dump schedule for all databases (including master); (ii) run dbcc checkstorage on all databases regularly (at least weekly), and follow up on any corruption problems found; (iii) run update [index] statistics at least weekly on all user tables; (iv) monitor the server errorlog for messages indicating problems (daily). Of course, a DBA has many other things to do as well, such as supporting users and developers, monitoring performance, etc.

When does the optimizer rely on default selectivity values? Are they accurate? If not, can they be made more accurate?

In some cases where statistics are not available or usable, the optimizer may rely on default selectivity values which may or may not reflect reality. Default selectivity values can be modified to make them more accurately reflect the underlying data.

What type of updates are available to the optimizer?

In-place, cheap, expensive and deferred

What do you mean by query optimization?

Query optimization is the process of arriving at the most efficient query plan. In practice it means designing appropriate indexes on a table and writing queries so that the query optimizer can use those indexes when it prepares the query plan; with this, performance increases.

What is the meaning of Join Transitive Closure (JTC)?

Join Transitive Closure (JTC) adds possible joins that are not explicitly stated in the where clause.
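
For example (hypothetical tables t1, t2, t3), given this query, JTC lets the optimizer also consider the implied join t1.c1 = t3.c1 when it evaluates join orders:

select *
from t1, t2, t3
where t1.c1 = t2.c1
  and t2.c1 = t3.c1
-- with JTC the optimizer adds the derived predicate: t1.c1 = t3.c1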

We have lost the sa password, what can we do?

Most people use the 'sa' account all of the time, which is fine if there is only ever one DBA administering the system. If you have more than one person accessing the server using the 'sa' account, consider using sa_role enabled accounts and disabling the 'sa' account. Funnily enough, this is obviously what Sybase thinks, because it is one of the questions in the certification exams. If you see that someone is logged in using the 'sa' account or is using an account with 'sa_role' enabled, then you can do the following:
sp_configure "allow updates to system tables", 1
go
update syslogins set password = null where name = 'sa'
go
sp_password null, newPassword
go

Is it a good idea to use data rows locking for all tables by default?

Not by default. Only if you're having concurrency (locking) problems on a table, and you're not locking many rows of the table in a single transaction, should you consider datarows locking for that table. In all other cases, use either datapages or allpages locking. (You could use datapages locking as the default lock scheme for all tables, because switching from datapages to datarows locking is fast and easy, whereas for allpages locking the entire table has to be converted, which may take a long time for large tables. Also, datapages locking has other advantages over allpages, such as not locking index pages, update statistics running at level 0, and the availability of the reorg command.)
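
For reference, changing the lock scheme of an existing table looks like this (hypothetical table name); switching between the two data-only-locking schemes is quick, whereas converting to or from allpages rebuilds the table:

alter table orders lock datarows
go
-- or, to use data pages locking:
alter table orders lock datapages
go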

What is meant by the term "Index Covering"?

Index covering is only possible with nonclustered indexes on APL tables and with either clustered or nonclustered indexes on DOL tables (since DOL clustered indexes have a leaf level above the data level). It refers to the condition where the index pages contain all the data necessary to provide the results of a query, so Adaptive Server does not need to access the data pages at all.
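
A minimal sketch (hypothetical table, index and query): every column the query touches is part of the index key, so the query can be answered from the index leaf pages alone:

create nonclustered index orders_cover_idx on orders (cust_id, order_date, qty)
go
-- covered query: cust_id, order_date and qty are all found in the index
select order_date, qty
from orders
where cust_id = 42
go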

Which optimizer statistics are maintained dynamically?

Page counts and row counts

What are the four (4) major phases of Adaptive Server Optimizer?

Parse, normalize, preprocess, and optimize.

What is the name of the GUI tool provided with the product that can capture abstract plans?

Planviewer

Define two ways of measuring performance.

Response time and throughput.

What are the system roles, and what is their status by default?

Sa_role, sso_role and oper_role are system roles. They are on by default.

In what two situations could specifying additional histogram steps be useful?

Specifying additional histogram steps can be useful if the table is large and the search value falls between steps, leaving the optimizer unable to determine an accurate selectivity estimate.

Conceptually, how does Adaptive Server maintain statistics - as an attribute of an index or an attribute of a column?

Statistics are maintained as an attribute of a column, not an index. Even though statistics are generated on the leading column of an index when an index is created, they are generated only once for the column. Thus, only one set of statistics exists for a column no matter how many indexes the column is part of. Statistics can also be generated for minor index attributes and non-indexed columns.

What happens if housekeeper free write percent is set to zero?

The housekeeper does not run any of its tasks and so statistics are not flushed to systabstats.

What is housekeeper?

The housekeeper is a task that becomes active when no other tasks are active. It writes dirty pages to disk, reclaims lost space, flushes statistics to systabstats and checks license usage.

What are the main advantages and disadvantages of using identity columns?

The main advantage of an identity column is that it can generate unique, sequential numbers very efficiently, requiring only a minimal amount of I/O. The disadvantages are that the generated values themselves are not transactional, and that the identity values may jump enormously when the server is shut down the rough way (resulting in "identity gaps"). You should therefore only use identity columns in applications where you've addressed these issues.
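
A minimal sketch (hypothetical table); the identity_gap setting, available in later ASE versions, limits how far the values can jump after an abrupt shutdown:

create table orders
(
    order_id numeric(10,0) identity,   -- server-generated, unique, sequential values
    cust_id  int           not null
)
with identity_gap = 100
go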

What is the difference between managing permissions through users and groups and through user-defined roles?

The main difference is that user-defined roles (introduced in ASE 11.5) are server-wide and are granted to logins. Users and groups (the classic method that has existed since the first version of Sybase) are limited to a single database. Permissions can be granted / revoked to both user-defined roles and users / groups. Whichever method you choose, don't mix them, as the precedence rules are complicated.

What is the meaning of the value in the Num_Reuse column of the sp_monitorconfig report for the 'procedure cache size' parameter?

The number of query plans that have been removed from the cache to make space.

How many triggers will be fired if more than one row is inserted?

A trigger fires once per statement, not once per row: if a single insert statement inserts many rows, the insert trigger fires only once, and the inserted pseudo-table inside the trigger contains all of the affected rows.

Is there any advantage in using 64-bit version of ASE instead of the 32-bit version?

The only difference is that the 64-bit version of ASE can handle a larger data cache than the 32-bit version, so you'd reduce physical I/O. Therefore, this may be an advantage if the amount of data cache is currently a bottleneck. There's no point in using 64-bit ASE with the same amount of "total memory" as for the 32-bit version, because 64-bit ASE comes with additional overhead in memory usage - the net amount of data cache would actually be less for 64-bit than for 32-bit in this case.

When can the Query Optimizer use an index to provide results for a query?

The query optimizer can use an index only when a query references the indexed column(s) in the where clause.

What are the steps you will take to improve performance of a poor performing query?

This is a very open-ended question and there could be a lot of reasons behind the poor performance of a query. Some general issues you could talk about: no indexes, table scans, missing or out-of-date statistics, blocking, excess recompilations of stored procedures, procedures and triggers without SET NOCOUNT ON, poorly written queries with unnecessarily complicated joins, too much normalization, excess usage of cursors and temporary tables. One of the tools that helps you troubleshoot performance problems is set showplan on.
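
A minimal sketch of examining a query plan from isql (the query itself is only an example):

set showplan on
go
select cust_id, sum(qty)
from orders
where order_date >= '2023-01-01'
group by cust_id
go
set showplan off
go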

What are the advantages of using triggers?

To maintain referential integrity automatically.

What are different types of triggers?

A trigger is a piece of code that fires automatically when an event occurs on a table: an insert, delete, or update. Accordingly, there are three types of triggers available in Sybase: insert, update, and delete triggers.
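
A minimal sketch of one of the three types, a delete trigger (hypothetical table names); note that it handles multi-row statements through the deleted pseudo-table:

create trigger orders_del_trg
on orders
for delete
as
begin
    -- 'deleted' contains every row removed by the triggering statement
    insert into orders_history
    select * from deleted
end
go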

What is primary key, foreign key and unique key?

Primary key: the column or columns whose values uniquely identify a row in a table. A primary key does not allow NULL values and, by default, is associated with a clustered index. Unique key: also enforces uniqueness of the column(s), but allows a NULL value and, by default, is associated with a nonclustered index. Foreign key: a key column in a table that logically depends on a primary key column in another table; a column (or combination of columns) whose values are required to match a primary key in some other table.

What is update statistics?

Updates information about the distribution of key values in specified indexes, or for specified columns, for all columns in an index, or for all columns in a table. Usage: ASE keeps statistics about the distribution of the key values in each index, and uses these statistics in its decisions about which indexes to use in query processing. Syntax:
update statistics table_name [[index_name] | [(column_list)]] [using step values] [with consumers = consumers]
update index statistics table_name [index_name] [using step values] [with consumers = consumers]
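
For example (hypothetical table and column names):

-- refresh statistics on the leading column of each index of the table
update statistics orders
go
-- refresh statistics on all columns of all indexes of the table
update index statistics orders
go
-- create or refresh statistics on a non-indexed column, with more histogram steps
update statistics orders (order_date) using 100 values
go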

How do you optimize a select statement?

By using SARGs (search arguments) in the where clause and checking the query plan with set showplan on. If the query is not considering the proper index, you may have to force the correct index to make the query run faster.
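
A sketch of forcing an index (hypothetical table and index names); forcing should be a last resort, after verifying that statistics are up to date:

select order_id, qty
from orders (index orders_cust_idx)   -- force the optimizer to use this index
where cust_id = 42
go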

How can you see whether runtime adjustments are occurring?

Using the set process_limit_action command, showplan, or sp_sysmon.

During query plan optimization, what settings does the optimizer look at to determine the total number of worker processes available for the query?

When the optimizer compiles a query plan, it looks at max parallel degree, max scan parallel degree, and the parallel clause of the select statement to determine the total number of worker processes available for a query.
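
For reference, these settings are examined or changed like this (the values are only examples):

-- server-wide limits considered by the optimizer
sp_configure "max parallel degree", 4
go
sp_configure "max scan parallel degree", 2
go
-- per-query parallel hint in the from clause
select count(*) from orders (parallel 4)
go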

Is there any disadvantage of splitting up your application data into a number of different databases?

When there are relations between tables / objects across the different databases, then there is a disadvantage indeed: if you were to restore a dump of one of the databases, those relations may no longer be consistent. This means that you should always back up a consistent set of such related databases together; that set effectively becomes the unit of backup / restore. Therefore, when making this kind of design decision, backup / restore issues should be considered (and the DBA should be consulted).

What do you do when a segment gets full?

Wrong: a segment can never get full (even though some error messages state something to that effect). A segment is a "label" for one or more database device fragments; the fragments to which that label has been mapped can get full, but the segments themselves cannot. (Well, OK, this is a bit of a trick question... when those device fragments fill up, you either add more space, or clean up old / redundant data.)

Are APL data page deallocations handled differently from DOL data page deallocations? If so, how?

Yes. When the last row is deleted from an APL data page, the page is deallocated and returned to the list of available pages for the table. DOL deletes are logical; therefore, when the last row is deleted from a DOL page, the row is not physically removed and the page will not be deallocated until the housekeeper GC task performs garbage collection or reorg is executed against the table.

How can you avoid deferred updates?

You can avoid deferred updates by avoiding updates and deletes through joins, by having a unique index somewhere on the table, and by using a different column in the where clause than the column being changed in the set clause.

Do we have to create sp_thresholdaction procedure on every segment or every database or any other place!?

You don't *have* to create threshold action procedures for any segment, but you *can* define thresholds on any segment. The log segment has a default "last chance" threshold set up that will call a procedure called "sp_thresholdaction". It is a good idea to define sp_thresholdaction, but you don't have to - if you don't, you will just get a "proc not found" error when the log fills up and will have to take care of it manually. Thresholds are created only on segments, not on devices or databases. You can create the procedure in sybsystemprocs with a name starting with "sp_" so that multiple databases share the same procedure, but often each database has its own requirements, so the procedures are created locally instead.
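
A minimal sketch (database, segment, page count and procedure names are hypothetical) of a threshold action procedure and the threshold that calls it; the four parameters shown are the ones the server passes to a threshold procedure:

create procedure my_threshold_proc
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
begin
    print "Threshold fired on segment '%1!' of database '%2!'", @segmentname, @dbname
    -- a real procedure would typically dump the transaction log here, for example:
    -- dump transaction @dbname to "/backup/mydb_log.dmp"
end
go

-- fire my_threshold_proc when free space on the segment drops below 2048 pages
exec sp_addthreshold 'mydb', 'order_seg', 2048, my_threshold_proc
go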

How do you move tempdb off of the master device?

This is the Sybase TS method of removing most activity from the master device. First, alter tempdb onto another device:
alter database tempdb on ...
go
Then drop the segments that map to the master device:
sp_dropsegment "default", tempdb, master
go
sp_dropsegment "logsegment", tempdb, master
go
sp_dropsegment "system", tempdb, master
go

Which of the three optimization goals is the default for ASE 15.0?

a. allrows_oltp
b. allrows_mix
c. allrows_dss
Ans: b. allrows_mix

What is bit datatype and what's the information that can be stored inside a bit column?

The bit datatype is used to store Boolean information like 1 or 0 (true or false). In Sybase ASE a bit column cannot be NULL, so it always holds either 0 or 1. (The nullable bit behaviour introduced in version 7.0 applies to Microsoft SQL Server, not to ASE.)

What are different statistic commands you use in UNIX?

iostat, netstat, vmstat, mpstat, psrstat

What are locks?

lock: A concurrency control mechanism that protects the integrity of data and transaction results in a multi-user environment. Adaptive Server applies page or table locks to prevent two users from attempting to change the same data at the same time, and to prevent processes that are selecting data from reading data that is in the process of being changed.

What is the difference between contents of monSysSQLText and monProcessSQLText MDA tables?

monSysSQLText provides a historical view of the most recently executed SQL text. The monProcessSQLText table displays SQL text that is currently being executed.
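
For example (assuming the MDA tables have been installed and the relevant configuration parameters are enabled):

-- SQL text of statements currently executing
select SPID, SQLText from master..monProcessSQLText
go
-- recently executed SQL text (historical)
select SPID, SQLText from master..monSysSQLText
go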

How will you check fragmentation?

optdiag statistics database[.owner[.table[.column]]] [-o output_file]

For example:

optdiag statistics warehouse5..orders

Statistics for table: "orders"
Data page count: 8615
Empty data page count: 0
Data row count: 149865.0000000000000000
Forwarded row count: 0.0000000000000000
Deleted row count: 0.0000000000000000
Data page CR count: 1077.0000000000000000
OAM + allocation page count: 38
First extent data pages: 0
Data row size: 110.4961665498949088
Derived statistics:
Data page cluster ratio: 1.0000000000000000

Data page count: the number of data pages the table occupies.
Empty data page count: the number of empty data pages. A nonzero count indicates fragmentation, which can be resolved with reorg.
Data row count: the total number of data rows.
Forwarded row count: the number of forwarded rows in the table. It is always zero for allpages-locked (APL) tables. A nonzero count indicates fragmentation, since each forwarded row requires an additional I/O to store the new row on another page. This fragmentation can be resolved with reorg.
Deleted row count: the number of deleted rows. A nonzero count indicates fragmentation, which can be resolved with reorg.

What are the levels of locking?

Page level, row level, and table level.

dump transaction with standby_access is used to?

provide a transaction log dump with no active transactions

When to run a reorg command?

reorg is useful when:
• A large number of forwarded rows causes extra I/O during read operations.
• Inserts and serializable reads are slow because they encounter pages with noncontiguous free space that needs to be reclaimed.
• Large I/O operations are slow because of low cluster ratios for data and index pages.
• sp_chgattribute was used to change a space management setting (reservepagegap, fillfactor, or exp_row_size) and the change is to be applied to all existing rows and pages in a table, not just to future updates.
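
For reference, the main reorg subcommands look like this (hypothetical table name; reorg is mainly for data-only-locked tables):

reorg forwarded_rows orders    -- fix forwarded rows
go
reorg reclaim_space orders     -- reclaim space left by deletes and updates
go
reorg compact orders           -- forwarded_rows and reclaim_space in one pass
go
reorg rebuild orders           -- rebuild the table and all of its indexes
go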

How do you tell the date/time the server was started?

select "Server Start Time" = crdate from master..sydatabases where name = "tempdb" or select * from sysengines

How can you display the details of missing statistics from join columns?

set option show_missing_stats on displays the details of missing statistics for join columns.

What command would you use to run all queries in the current session in serial mode?

set parallel_degree 1

In ASE 15.0, what is the option used to control Run Time Adjustment behavior?

set process_limit_action quiet | warning | abort

What command do you use to flush statistics from memory?

sp_flushstats <object_name>

In which system tables are optimizer statistics stored?

systabstats and sysstatistics

How do you trace H/W signals?

With the trap command.

What are work tables? What is the limit?

Work tables are created automatically in tempdb by Adaptive Server for merge joins, sorts, and other internal processing. There is a limit of 14 work tables per query: the system will create at most 14 work tables for a single query.

Describe differences between chained mode and unchained mode?

· Chained mode is ANSI-89 compliant, whereas unchained mode is not.
· In chained mode the server executes an implicit begin tran, whereas in unchained mode an explicit begin tran is required.

What are the 4 isolation levels, and which is the default?

· Level 0 - read uncommitted / dirty reads
· Level 1 - read committed (the default)
· Level 2 - repeatable read
· Level 3 - serializable

