Monday, December 31, 2012

Query to find out biggest table in teradata

Here is a query with which you can find the biggest table in the system. The TOP clause can be modified to display the top 10 or 100 biggest tables in the system.

select top 1 databasename, tablename ,sum(currentperm) from dbc.tablesize  group by 1,2 order by 3 desc;                                                                                                                                          
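
A variant of the same query (a quick sketch; the TOP 10 and the GB conversion are additions for readability) lists the ten biggest tables with their size in gigabytes:

select top 10 databasename, tablename, sum(currentperm)/1024/1024/1024 (decimal(10,2)) as size_gb
from dbc.tablesize
group by 1,2
order by 3 desc;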

How to set timezone in Teradata?

The time zone can be set in Teradata at the system level, user level, or session level. To set the time zone at the system level, you need to modify the DBS Control parameters.

At the user level, the time zone can be set while creating the user, for example:

CREATE USER username AS ... TIME ZONE = LOCAL (or TIME ZONE = '5:30', or TIME ZONE = NULL);


At the session level it can be set with the SET command. To set the time zone to LOCAL (i.e., the current system's time zone), issue the following in the current session:

SET TIME ZONE LOCAL

To set a specific time zone, issue the command below:


SET TIME ZONE INTERVAL '05:00' HOUR TO MINUTE
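
To verify what the session is currently using, you can issue HELP SESSION, which reports the session's settings (including time zone information):

HELP SESSION;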



Sunday, December 23, 2012

What is queue table in Teradata?


The Queue Tables feature simplifies the implementation of asynchronous, event-driven applications. Performance is improved by saving CPU and network resources. Queue tables possess all the properties of regular Teradata Database tables: persistence, scalability, parallelism, ease of use, reliability, availability, recovery, and security. In addition, queue tables possess FIFO queue properties such as push, pop, and peek queue operations.

These properties allow queue tables to provide flexibility, power, and functionality, and to leverage the natural performance characteristics of the Teradata Database. With queue tables, the Teradata Database provides the capability to run real-time, event-driven active data warehouse activity while still having access to historical data, providing the capability to make more effective business decisions in real-time processing environments.


The queue table definition differs from a standard base table in that a queue table always contains a user-defined Queue Insertion TimeStamp (QITS) column as the first column of the table. The QITS holds the time the row was inserted into the queue table and serves as the means for approximate FIFO ordering.

Queue properties are implemented as follows:


  1. The FIFO push operation is defined as a SQL INSERT operation to store rows into a queue table.
  2. The FIFO peek operation is defined as an SQL SELECT operation to retrieve rows from a queue table without deleting them. This is also referred to as browse mode.
  3. The FIFO pop operation is defined as an SQL SELECT AND CONSUME operation to retrieve a row from a queue table and delete that row upon completion of the read. This is also referred to as consume mode. A consume-mode request goes into a delayed state when a SELECT AND CONSUME finds no rows in the queue table. The request remains idle until an INSERT to that queue table awakens it; that row is returned and then deleted. Consumed rows are returned in FIFO order.
Create queue table syntax:

create table prod.myquetable, queue
(
create_timestamp timestamp(6) not null default current_timestamp(6),
uid integer,
uname varchar(30)
);
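
Once the table is created, rows are pushed with plain INSERTs and popped with SELECT AND CONSUME. A minimal sketch against the table above:

insert into prod.myquetable (uid, uname) values (1, 'first');
select and consume top 1 * from prod.myquetable;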



Saturday, December 1, 2012

What is logon privilege in Teradata?


When a user is created in the database, it automatically gets database logon privileges and can
log on from any configured, connected client using Teradata authentication (the TD2 mechanism).

By default, the database automatically grants permission to log on for all users defined in the
database, from all client system connections (hostids).


You can use the REVOKE LOGON statement to restrict:
• All logons to the database for a particular user
• Logons to the database by a user through one or more client connections (hostids).

Example :

GRANT LOGON ON hostid1, hostid2.. TO username1, username2..;
REVOKE LOGON ON hostid1, hostid2.. FROM username1, username2..;

or


GRANT LOGON ON ALL TO username1, username2..;
REVOKE LOGON ON ALL FROM username1, username2..;

where hostid corresponds to a host group (HostNo value).
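
The logon rules currently in effect can be reviewed by querying the dbc.logonrules view (assuming you have SELECT access on it):

select * from dbc.logonrules;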





Monday, November 12, 2012

Enabling and Disabling query logging in Teradata


Database Query Logging (DBQL) is an optional feature in Teradata through which we can track the processing behavior of the database by logging user activity. User activity can be logged at various levels, namely objects, SQL text, steps, explain, etc. One can also set a limit on the length of the query text to be logged.
Since this is a resource-consuming task that the database has to perform for each activity, one should assess the impact on database performance before enabling it at the various levels.

Query logging can be enabled for a user through its account name. Every user created in the database has an account name (account string) assigned at creation time; an account string can also be assigned to a user or group of users manually.

The account string of a user can be verified by querying the "dbc.accountinfo" view, or by editing the profile to check the account string.

select * from dbc.accountinfo where username = 'username';


DBQL can be enabled for specific users through their account string with the command below:

begin query logging with explain, objects, sql on all account=('$M$BUSI$S$D$H');

In the above command, query logging will be enabled for objects, SQL text, and explain for all users having account '$M$BUSI$S$D$H'. A few more detailed options for query logging are available, namely step logging, XML explain, and summary logging. The size of the logged SQL text can be limited with the LIMIT clause as below; here 0 means unlimited, whereas any positive integer limits the text to that many characters.

begin query logging with explain, sql limit sqltext=0 on all account=('$M$BUSI$S$D$H');

Query logging can be disabled with the END QUERY LOGGING command, as shown below.

end query logging with explain, objects, sql on all account=('$M$BUSI$S$D$H');


DBQL can be enabled for all users irrespective of their account string with the command below:

Begin query logging with objects, sql limit sqltext=0 on all


Finally, DBQL rules can be verified by querying the "dbc.dbqlrules" view.

select * from dbc.dbqlrules
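
Once logging is active, the captured queries themselves can be inspected through the dbc.qrylog view (a view over the DBQL log table), for example:

select queryid, username, starttime, querytext
from dbc.qrylog
order by starttime desc;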

Sunday, November 4, 2012

Where does TD store transient journal?


Transient Journal (TJ) is an area of space in the DBC database which is used primarily for storing rollback/undo information during inserts, deletes, and updates on tables.

TJ requires perm space and is stored in "dbc.transientjournal". This special type of table can grow beyond DBC's perm space limit, until the whole system runs out of perm space.

It is good practice to back up the Transient Journal regularly and delete it to enhance query and load performance.

Saturday, November 3, 2012

Few differences between Global Temporary Table and Volatile Table

 Global Temporary tables (GTT):

  • The table definition is stored in the Data Dictionary.
  • Data is stored in temp space.
  • GTT data is active only until the session ends, but the table definition remains in the Data Dictionary until it is dropped using a DROP TABLE statement.
  • Secondary indexes can be created on a Global Temporary table.
  • Stats can be collected on a GTT.
  • CHECK and BETWEEN constraints, COMPRESS columns, and the DEFAULT and TITLE clauses are supported by Global Temporary tables.
  • In a single session, 2000 Global Temporary tables can be materialized.


Volatile Temporary tables (VTT) (creation syntax for both is sketched after this list):

  • The table definition is stored in the system cache.
  • Data is stored in spool space.
  • VTT data and the table definition are both active only until the session ends.
  • Secondary indexes cannot be created on a Volatile table.
  • Stats cannot be collected on a VTT.
  • CHECK and BETWEEN constraints, COMPRESS columns, and the DEFAULT and TITLE clauses are not supported by Volatile tables.
  • In a single session, 1000 Volatile tables can be materialized.
  • A VTT does not support default values for a column while creating the table.
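
For reference, here is a minimal sketch of both creation syntaxes (the table and column names are illustrative):

create global temporary table mydb.gtt_demo
(id integer, val varchar(20))
on commit preserve rows;

create volatile table vt_demo
(id integer, val varchar(20))
on commit preserve rows;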

Wednesday, October 31, 2012

How to find join indexes in database?


For newbies there are various questions one may get stuck with; this is one of the common ones.

Join indexes can be found for a database by querying dbc.tables.

select * from dbc.tables where databasename='mydatabasename' and tablekind='i';

OR


SELECT  * FROM dbc.indices 
WHERE indextype='J'
     AND databasename='mydatabase'
     AND tablename='mytablename'
ORDER BY indexname,columnposition;

Saturday, October 27, 2012

3055 Logons are only enabled for user DBC

I had an incident where we were unable to log in to the Teradata system from any user. The error reported was "3055: Logons are only enabled for user DBC". This normally happens after some maintenance work, after an upgrade/migration of the database, or when the database administrator has intentionally put the restriction in place.

To sort this issue out, the steps below can be followed:

1. Log in to the node.
2. Type the command "cnsterm 6" and press Enter.
3. At the Supervisor command prompt, type "enable all logons" and press Enter.

         
         Input Supervisor Command:
         > enable all logons
         enable all logons

         12/10/26 03:17:47 Logons enabled.
         Enable Logons of screen debug in ctl have been updated to All.



2616: Numeric overflow occurred during computation.

In Teradata we normally come across this error while selecting a column value which is too large to be accommodated in the requested datatype.

In the example below we tried to count a table which has more than 6 billion records. COUNT returns its result as an INTEGER, and the standard maximum value for the INTEGER data type is 2147483647, so whenever the selected value exceeds this range, the query throws this error.

select count(*)  from bm_tb.cust_priscription;

SELECT Failed. 2616: Numeric overflow occurred during computation. 

In this case, to get the proper result, we need to cast the returned value to FLOAT or DECIMAL.

select count(*)  (float) from bm_tb.cust_priscription;

6,806,668,046.00

Or

select count(*)  (DECIMAL(12,0))  from bm_tb.cust_priscription;

6,806,668,046
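
On releases that support the BIGINT type, casting the count to BIGINT is another option:

select count(*) (BIGINT) from bm_tb.cust_priscription;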


You can try this demo to understand the integer limit in SQL Assistant:

SELECT CAST( 2147483647 AS INT)

Result: 2147483647

Then try to select an integer value greater than the integer limit.

SELECT CAST( 2147483648 AS INT)

Result: "SELECT Failed. 2616: Numeric overflow occurred during computation."


Thursday, October 25, 2012

When should the statistics be collected?

We should collect statistics in the following cases (see the example after this list):
  • When statistics are stale; you can use HELP STATISTICS on the table to find the last collection date.
  • On all non-unique indexes.
  • On non-index join columns.
  • On the Primary Index of small tables.
  • On the Primary Index of a Join Index.
  • On Secondary Indexes defined on any join index.
  • On join index columns that frequently appear in WHERE search conditions.
  • On columns that frequently appear in WHERE search conditions or in the ON clause of joins.
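
For example, collecting and then checking statistics (the database, table, and column names here are illustrative):

collect statistics on mydb.mytable column (mycolumn);
help statistics mydb.mytable;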

Wednesday, October 24, 2012

change password prompt at first login in teradata

In Teradata, the table for default user configurations is "dbc.syssecdefaults", and if the user is assigned a profile then the configuration table is "DBC.PROFILES". Forcing a password change at first login means making the user's password temporary, i.e., setting the .EXPIREPASSWORD attribute to 0.

Through Teradata Administrator we can modify the user and tick the temporary flag, but this sometimes does not seem to work.

So we have a command to serve this purpose:

MODIFY USER myusername AS PASSWORD = myteradata FOR USER ;

This command will set .EXPIREPASSWORD to 0 and will make the password temporary in effect.

Note: the above command will work only if the Expire field for the profile is not already set to zero; there should be a non-zero value in this field, and no session for that user should be connected.

How to find the Release and Version information in Teradata


To find Release and Version information, you can query the Data Dictionary table DBC.DBCINFO.

SELECT * FROM DBC.DBCINFO;
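
To pull a single value, you can filter on the InfoKey column; the keys include 'VERSION' and 'RELEASE':

SELECT InfoData FROM DBC.DBCINFO WHERE InfoKey = 'VERSION';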

Tuesday, October 23, 2012

What is High confidence, Low confidence and No confidence in EXPLAIN plan?


EXPLAIN gives the execution strategy based on the dictionary information (statistics) the optimizer has about the objects in question.

  • HIGH CONFIDENCE: Statistics are collected.
  • LOW CONFIDENCE: Statistics are not collected, but the WHERE condition involves an indexed column, so estimates can be based on sampling.
  • NO CONFIDENCE: Statistics are not collected and the condition is on a non-indexed column.

What are Global temporary tables and Volatile temporary tables?


 Global Temporary tables (GTT):

1. When they are created, the definition goes into the Data Dictionary.
2. When materialized, data goes into temp space.
3. That is why data is active only while the session persists,
    while the definition remains there until it is dropped using a DROP TABLE statement.
    If dropped from some other session, it should be DROP TABLE ALL;
4. You can collect stats on a GTT.

Volatile Temporary tables (VTT):

1. The table definition is stored in the system cache.
2. Data is stored in spool space.
3. That is why data and the table definition are both active only while the session persists.
4. No collected stats for a VTT.
5. If you are using a volatile table, you cannot put default values at the column level while creating the table.

Monday, October 22, 2012

3668: A table listed in the Dictionary is not present


This error shows up when a restore of a table has failed in the middle and a user tries to select from the table.

LOCKING Failed. 3668: A table listed in the Dictionary is not present.

The solution is to drop the table and recreate/restore/copy it from the latest backup.


Saturday, October 20, 2012

How to query database space for the whole system.

We can query dbc.diskspace to learn all about space in the system. Here is a query through which we can get an idea of the maximum permanent space allocated, how much space is currently consumed and how much is free, with details about the percentage of use and the percentage of space-usage skew.


SELECT Databasename AS "Database Name",
Max_perm_gb (DECIMAL (10,1)) AS "Max Perm (GB)",
current_perm_gb (DECIMAL (10,1)) AS "Current Perm (GB)",
EffectiveUsedSpace_Gb (DECIMAL (10,1)) AS "Effective Used Space (GB)",
Current_Perm_Percent AS "Current Perm %",
(Max_perm_gb (DECIMAL (10,1))) -( current_perm_gb (DECIMAL (10,1))) as "unused space in GB" ,
(100) - (Current_Perm_Percent)  as "unused %",
Peak_Perm_Gb (DECIMAL (10,1)) AS "Peak Perm (GB)",
Peak_Perm_Percent AS "Peak Perm %",
D_SKEW AS "Skew %"
FROM
(
SELECT
databasename, SUM(maxperm)/1024/1024/1024 AS Max_Perm_Gb,
SUM(currentperm)/1024/1024/1024 AS Current_Perm_Gb,
MAX(currentperm)*COUNT(*) /1024/1024/1024 AS EffectiveUsedSpace_Gb,
Current_Perm_Gb/Max_Perm_Gb * 100 AS Current_Perm_Percent,
SUM(peakperm)/1024/1024/1024 AS Peak_Perm_Gb,
Peak_Perm_Gb/Max_Perm_Gb * 100 AS Peak_Perm_Percent,
ZEROIFNULL((MAX(currentperm) - AVG(currentperm))/MAX(NULLIFZERO(currentperm)) * 100) AS D_SKEW
FROM dbc.diskspace
GROUP BY 1
) a
ORDER BY 2 DESC;

Friday, October 19, 2012

prepare script for enabling and disabling trigger in teradata


This kind of script is required when the system is going in for maintenance. The query below generates ALTER TRIGGER ... DISABLED statements for all currently enabled triggers; the enabling counterpart follows it.

SELECT 'Alter trigger '|| TRIM(databasename) || '.' || TRIM(triggername) || ' Disabled' ||';'
FROM dbc.triggers
WHERE  Enabledflag = 'Y'
ORDER BY databasename, tablename, triggername;
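
The matching enable script is the same query with the flag and keyword reversed, so triggers disabled during maintenance can be switched back on afterwards:

SELECT 'Alter trigger '|| TRIM(databasename) || '.' || TRIM(triggername) || ' Enabled' ||';'
FROM dbc.triggers
WHERE  Enabledflag = 'N'
ORDER BY databasename, tablename, triggername;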

Thursday, October 18, 2012

List of Teradata Database Access Rights


Here is a list of access rights and their full forms.

AE = ALTER EXTERNAL PROCEDURE
AF = ALTER FUNCTION
AP = ALTER PROCEDURE
AS = ABORT SESSION
CA = CREATE AUTHORIZATION
CD = CREATE DATABASE
CE = CREATE EXTERNAL PROCEDURE
CF = CREATE FUNCTION
CG = CREATE TRIGGER
CM = CREATE MACRO
CO = CREATE PROFILE
CP = CHECKPOINT
CR = CREATE ROLE
CT = CREATE TABLE
CU = CREATE USER
CV = CREATE VIEW
D = DELETE
DA = DROP AUTHORIZATION
DD = DROP DATABASE
DF = DROP FUNCTION
DG = DROP TRIGGER
DM = DROP MACRO
DO = DROP PROFILE
DP = DUMP
DR = DROP ROLE
DT = DROP TABLE
DU = DROP USER
DV = DROP VIEW
E = EXECUTE
EF = EXECUTE FUNCTION
GC = CREATE GLOP
GD = DROP GLOP
GM = GLOP MEMBER
I = INSERT
IX = INDEX
MR = MONITOR RESOURCE
MS = MONITOR SESSION
NT = NONTEMPORAL
OD = OVERRIDE DELETE POLICY (Reserved for future use)
OI = OVERRIDE INSERT POLICY (Reserved for future use)
OP = CREATE OWNER PROCEDURE
OS = OVERRIDE SELECT POLICY (Reserved for future use)
OU = OVERRIDE UPDATE POLICY (Reserved for future use)
PC = CREATE PROCEDURE
PD = DROP PROCEDURE
PE = EXECUTE PROCEDURE
R = RETRIEVE/SELECT
RF = REFERENCE
RO = REPLCONTROL
RS = RESTORE
SA = SECURITY CONSTRAINT ASSIGNMENT
SD = SECURITY CONSTRAINT DEFINITION
SH = SHOW
SR = SET RESOURCE RATE
SS = SET SESSION RATE
ST = STATISTICS
TH = CTCONTROL
U = UPDATE
UM = UDT Method
UT = UDT Type
UU = UDT Usage

Wednesday, October 17, 2012

Teradata Show lock utility


When a table or database is being backed up, a HUT lock is placed on the object, which is released when the backup finishes successfully. But sometimes, when a backup fails or is aborted, the HUT lock stays in place and queries on those objects go into blocking. One way to find and release locks on objects is to log in to one of the nodes and use the showlocks utility.

Here are the steps we can follow:

1. Start CNSTERM 6 by typing "cnsterm 6".


UAT2_1-9:/var/opt/teradata/DEV_108 # cnsterm 6
Attempting to connect to CNS...Completed.
Hello
Started 'dip' in window 1
 at Tue Oct 16 04:03:29 2012


Input Supervisor Command:
start updatedbc

Started 'updatedbc' in window 1
 at Tue Oct 16 04:18:18 2012


Input Supervisor Command:
enable all logons

12/10/16 05:11:03 Logons enabled.
Enable Logons of screen debug in ctl have been updated to All.


Input Supervisor Command:
start dip

Started 'dip' in window 1
 at Tue Oct 16 05:19:50 2012

2. Start "showlock utility" by typing "start showlocks" on CNSTERM 6 terminal.


Input Supervisor Command:
> start showlocks
start showlocks

Started 'showlocks' in window 1
 at Wed Oct 17 09:22:22 2012


Input Supervisor Command:
> 

3. Start CNSTERM 1 in a different putty session to see the HUT locks:



UAT2_1-9:/var/opt/teradata/DEV_108 # cnsterm 1
Attempting to connect to CNS...Completed.
Hello
    USER arcuser4   MODE Excl     AMP All Amps

GL_TB.TRUNK_GS_SUMMARY
    USER arcuser4   MODE Excl     AMP All Amps

GL_TB.VERTICAL
    USER arcuser4   MODE Excl     AMP All Amps

GL_TB.ZZZ_GL_PARAMETER
    USER arcuser4   MODE Excl     AMP All Amps

GL_TB.ZZZ_TB_BI_DIM_STORE
    USER arcuser4   MODE Excl     AMP All Amps

GL_TB.ZZZ_TB_BI_DIM_STORE_OPER_HRS
    USER arcuser4   MODE Excl     AMP All Amps

GL_TB.base64
    USER arcuser4   MODE Excl     AMP All Amps

              --ShowLocks Processing Complete--

Now, once we get the list of objects holding HUT locks, we can log in to an "arcmain" session and issue the RELEASE LOCK command on those objects.

4. type "arcmain" and enter

5. type  “Logon 10.12.88.120/arcuser4,password;”

6. type “Release lock (GL_TB.TRUNK_GS_SUMMARY);”

7. Logoff;

Tuesday, October 16, 2012

Get current session information in Teradata


You can find the current user of a session by simply querying:

select user;

The whole session information can be queried with the statement:

help session;

Sunday, October 14, 2012

Different Types of Joins in Teradata



Teradata joins

A join is an action that projects columns from two or more tables into a new virtual table.
The Teradata Database supports joins of as many as 64 tables per query.
Less formally, a join is an action that retrieves column values from more than one table.

Joins are needed to extract useful information from related tables in an RDBMS. A common column or set of columns is joined between two tables of the same information group to get meaningful information.

In Teradata, the common join types used when writing queries are:

- Inner join (can also be "self join" in some cases)
- Outer Join (Left, Right, Full)
- Cross join (Cartesian product join)

In Teradata, we have the Optimizer (a parallel-aware optimizer), which determines the type of join strategy to be used based on the user query, keeping performance in mind.


When the user submits a join query, the optimizer comes up with join plans to perform the joins (the chosen plan can be inspected with EXPLAIN, as sketched after this list). These join strategies include:

- Merge Join
- Nested Join
- Hash Join
- Product join
- Exclusion Join


 Merge Join
--------------------

Merge join is a concept in which the rows to be joined must be present on the same AMP. If the rows to be joined are not on the same AMP, Teradata will either redistribute the data or duplicate the data in spool to make that happen, based on the row hash of the columns involved in the join's WHERE clause. If the two tables to be joined have the same primary index, then the records are already present on the same AMP and redistribution of records is not required.

There are four scenarios for data placement in a merge join:
Case 1: If the join is UPI = UPI, the records to be joined are present on the same AMP and redistribution is not required. This is the most efficient and fastest join strategy.
Case 2: If the join is UPI = non-index column, the records of the second table have to be redistributed across AMPs based on the data corresponding to the first table.
Case 3: If the join is non-index column = non-index column, both tables have to be redistributed so that matching data lies on the same AMP, and the join happens on the redistributed data. This strategy is time consuming, since complete redistribution of both tables takes place across all the AMPs.
Case 4: For a join happening on the primary index, if the referenced table (the second table in the join) is very small, then this table is duplicated/copied onto every AMP.

 Nested Join
-------------------
Nested join is one of the most precise join plans suggested by the Optimizer. A nested join works on a UPI/USI used in the join statement: it retrieves the single row from the first table, then checks for matching rows in the second table using an index (primary or secondary), and returns the matching results.

Example:
Select EMP.Ename, DEP.Deptno, EMP.salary
from
EMPLOYEE EMP,
DEPARTMENT DEP
Where EMP.Enum = DEP.Enum
and EMP.Enum = 2345;    -- this results in a nested join

 Hash join
----------------
Hash join is one of the plans suggested by the Optimizer based on the joining conditions. We can call hash join a close relative of the merge join based on its functionality. In a merge join, the joining happens on the same AMP over sorted rows. In a hash join, the AMP holds the small table completely inside its memory, and the join happens on row hash against the other table's rows.

Advantages of hash joins are:
1. They are faster than merge joins, since the large table does not need to be sorted.
2. Since the join happens between a table held in AMP memory and a table in unsorted spool, it completes quickly.

 Exclusion Join
-------------------------

These types of joins are suggested by the optimizer when the following are used in queries:
- NOT IN
- EXCEPT
- MINUS
- SET subtraction operations

Select EMP.Ename, EMP.salary
from
EMPLOYEE EMP
WHERE EMP.Enum NOT IN
(Select Enum from
DEPARTMENT DEP
where Enum is NOT NULL);

Please make sure to add the additional WHERE filter "<column> IS NOT NULL" inside the subquery, since the presence of a NULL in a NOT IN column list will return no results.

Exclusion join for the above NOT IN query has 3 scenarios:

Case 1: Matched data in the "NOT IN" subquery will disqualify that row.
Case 2: Non-matched data in the "NOT IN" subquery will qualify that row.
Case 3: Any unknown result in the "NOT IN" will disqualify that row (NULL is the typical example of this scenario).

Saturday, October 13, 2012

7535: Operation not allowed: table header has invalid partitioning.


This can occur due to a restore/copy of a table to another system with different hardware or a different operating system. In this case, the partitioning expressions need to be regenerated and updated in the table header of the table.

Example:

SELECT Failed. 7535: Operation not allowed: MTD_DB.MTD_MSTR table header has invalid partitioning. 

The solution is to revalidate the table header against the primary index of the table.
For that we can ALTER the TABLE with the REVALIDATE PRIMARY INDEX option.

Here is the command:


alter table Mtd_tb.mtd_mstr revalidate primary index;




Script to replicate privileges.

Sometimes we may need to replicate the exact access grants of one user to another user. Here is a script to prepare the "GRANT" statements of a particular user.

SELECT
'GRANT '
|| CASE
WHEN ACCESSRIGHT = 'AP' THEN 'ALTER PROCEDURE'
WHEN ACCESSRIGHT = 'AS' THEN 'ABORT SESSION'
WHEN ACCESSRIGHT = 'CD' THEN 'CREATE DATABASE'
WHEN ACCESSRIGHT = 'CG' THEN 'CREATE TRIGGER'
WHEN ACCESSRIGHT = 'CM' THEN 'CREATE MACRO'
WHEN ACCESSRIGHT = 'CO' THEN 'CREATE PROFILE'
WHEN ACCESSRIGHT = 'CP' THEN 'CHECKPOINT'
WHEN ACCESSRIGHT = 'CR' THEN 'CREATE ROLE'
WHEN ACCESSRIGHT = 'CT' THEN 'CREATE TABLE'
WHEN ACCESSRIGHT = 'CU' THEN 'CREATE USER'
WHEN ACCESSRIGHT = 'CV' THEN 'CREATE VIEW'
WHEN ACCESSRIGHT = 'D' THEN 'DELETE'
WHEN ACCESSRIGHT = 'DD' THEN 'DROP DATABASE'
WHEN ACCESSRIGHT = 'DG' THEN 'DROP TRIGGER'
WHEN ACCESSRIGHT = 'DM' THEN 'DROP MACRO'
WHEN ACCESSRIGHT = 'DO' THEN 'DROP PROFILE'
WHEN ACCESSRIGHT = 'DP' THEN 'DUMP'
WHEN ACCESSRIGHT = 'DR' THEN 'DROP ROLE'
WHEN ACCESSRIGHT = 'DT' THEN 'DROP TABLE'
WHEN ACCESSRIGHT = 'DU' THEN 'DROP USER'
WHEN ACCESSRIGHT = 'DV' THEN 'DROP VIEW'
WHEN ACCESSRIGHT = 'E' THEN 'EXECUTE'
WHEN ACCESSRIGHT = 'I' THEN 'INSERT'
WHEN ACCESSRIGHT = 'IX' THEN 'INDEX'
WHEN ACCESSRIGHT = 'MR' THEN 'MONITOR RESOURCE'
WHEN ACCESSRIGHT = 'MS' THEN 'MONITOR SESSION'
WHEN ACCESSRIGHT = 'PC' THEN 'CREATE PROCEDURE'
WHEN ACCESSRIGHT = 'PD' THEN 'DROP PROCEDURE'
WHEN ACCESSRIGHT = 'PE' THEN 'EXECUTE PROCEDURE'
WHEN ACCESSRIGHT = 'RO' THEN 'REPLICATION OVERRIDE'
WHEN ACCESSRIGHT = 'R' THEN 'SELECT'
WHEN ACCESSRIGHT = 'RF' THEN 'REFERENCE'
WHEN ACCESSRIGHT = 'RS' THEN 'RESTORE'
WHEN ACCESSRIGHT = 'SS' THEN 'SET SESSION RATE'
WHEN ACCESSRIGHT = 'SR' THEN 'SET RESOURCE RATE'
WHEN ACCESSRIGHT = 'U' THEN 'UPDATE'
END
|| ' ON '
|| TRIM(DATABASENAME) || CASE WHEN TABLENAME<>'ALL' THEN '.'||TRIM(TABLENAME) ELSE '' END
|| ' TO TargetUserName '
|| CASE WHEN GRANTAUTHORITY = 'Y' THEN ' WITH GRANT OPTION;' ELSE ' ;' END
FROM DBC.ALLRIGHTS
WHERE USERNAME = 'SourceUserName'  --- and Databasename='Targetdatabasename'
GROUP BY 1
;


Note: this generates only the privileges assigned directly to the user, not those granted through roles.
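
To see what a user additionally receives through roles, the role grants can be pulled from dbc.rolemembers and dbc.allrolerights (a sketch, assuming direct role membership):

SELECT RR.*
FROM DBC.ALLROLERIGHTS RR
WHERE RR.ROLENAME IN
(SELECT ROLENAME FROM DBC.ROLEMEMBERS WHERE GRANTEE = 'SourceUserName');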

Friday, October 12, 2012

Kill session in teradata


We can kill/abort sessions from a SQL command as below.

1. Find the session info as:

select * from dbc.SessionInfo;

2. Execute the AbortSessions function:

SELECT SYSLIB.AbortSessions(1,'MyUser',0,'Y','Y');

Param1: HostNo
Param2: UserName
Param3: SessionNo
Param4: LogoffSessions
Param5: UserOverride

Where 0 in Param3 means all sessions of the user will be aborted; you can specify a session number to kill a specific session.
Param4 = 'Y' means log off the session first.
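
To target a single session of a user, the session number can first be looked up and then passed as Param3 (the session number below is illustrative):

select hostno, sessionno, username from dbc.sessioninfo where username = 'MyUser';
SELECT SYSLIB.AbortSessions(1,'MyUser',123456,'Y','Y');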


Tuesday, October 9, 2012

Difference between timestamp(0) and timestamp(6)


TIMESTAMP types are imported and exported in record and indicator modes as
CHARACTER data using the ANSI format string and the site-defined client character set.

The basic form of TIMESTAMP is TIMESTAMP(0), which is represented as CHAR(19) in ANSI string form with the format YYYY-MM-DD HH:MI:SS.

More precise timestamps can be represented as TIMESTAMP(n), where n = 1 to 6, represented as CHAR(20+n). This can be used to store fractional seconds (up to microsecond precision).

Hence TIMESTAMP(6) is represented as CHAR(26), and TIMESTAMP(6) can be formatted as the ANSI character string YYYY-MM-DDbHH:MI:SS.ssssss (where 'b' denotes a blank).
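
A quick way to see the two precisions side by side (the results will vary with the current time):

select current_timestamp(0);
select current_timestamp(6);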



Thursday, August 2, 2012

PPI Backup restore Walk-through


PPI Backup and restore.


Consider the following when archiving selected partitions in PPI tables:

• A restore operation always deletes the selected partitions of the target table before restoring the rows that are stored in the archive.
• Archiving selected partitions operates on complete partitions within tables, meaning that the selection of a partial partition implies the entire partition.
• PPI and non-PPI tables are permissible in a single command. Both table types can be managed in a single database with the EXCLUDE TABLES option.
• Partitioning is based on one or more columns specified in the table definition.
• Partition elimination restricts a query to operating only on the set of partitions that are required for the query.
• Incremental archives are possible by using a partition expression that is based on date fields, which indicate when a row is inserted or updated.
• An archive or restore of selected partitions only places full-table HUT locks. HUT locks on individual partitions are not supported.
• Re-collect table statistics after a restore of selected partitions. Statistics are part of the table dictionary rows, which are not restored during a partition-level restore.
• If a table has a partitioning expression that is different from the partitioning expression used in the PPI archive, a PPI restore is possible as long as no other significant DDL changes are made to the table.
• For a restore, a full backup should be restored beforehand; for a copy, a table with the same definition should be present.


Scripts Used:


Backing up PPI using a date range.


archive data table
(test1.tranhist) (partitions where (!transactiondate between date '2001-01-01' and date '2001-05-01'!)),
release lock,
file=arcpart;

Restoring partitions.


.logon localhost/sysdba,sysdba;
restore data table
(test1.tranhist) (partitions where (!transactiondate between date '2001-01-01' and date '2001-05-01'!)),
release lock,
file=arcpart;
  

Copying partitions from the backup set.


.logon localhost/sysdba,sysdba;
copy data table
(test2.tranhist)(from (test1.tranhist), partitions where (!transactiondate between date '2001-01-01' and date '2001-05-01'!)),
release lock,
file=arcpart;

Walk-through of test conducted:

Created a Partition tables.


CREATE SET TABLE test1.tranhist ,FALLBACK ,
     DUAL BEFORE JOURNAL,
     NO AFTER JOURNAL,
     WITH JOURNAL TABLE = test1.bcd ,
     CHECKSUM = DEFAULT,
     DEFAULT MERGEBLOCKRATIO
     (
      TransactionID INTEGER,
      TransactionDate DATE FORMAT 'YYYY-MM-DD',
      TransactionParam1 INTEGER)
PRIMARY INDEX ( TransactionID )
PARTITION BY RANGE_N(TransactionDate BETWEEN DATE '2000-01-01' AND DATE '2004-12-31' EACH INTERVAL '1' MONTH );


Inserted records for every partition range.
    
insert into test1.tranhist values (1,'2000/03/04',2);
insert into test1.tranhist values (15,'2002/01/19',2);
insert into test1.tranhist values (21,'2000/08/04',2);
insert into test1.tranhist values (13,'2002/02/04',2);
insert into test1.tranhist values (21,'2000/08/19',2);
insert into test1.tranhist values (13,'2002/02/15',2);
insert into test1.tranhist values (19,'2001/05/04',2);
insert into test1.tranhist values (17,'2003/01/04',2);
insert into test1.tranhist values (19,'2001/05/17',2);
insert into test1.tranhist values (17,'2003/01/19',2);
insert into test1.tranhist values (20,'2001/06/04',2);
insert into test1.tranhist values (16,'2003/01/04',2);
insert into test1.tranhist values (20,'2001/06/17',2);
insert into test1.tranhist values (16,'2003/01/17',2);
insert into test1.tranhist values (7,'2002/03/04',2);
insert into test1.tranhist values (15,'2004/01/04',2);
insert into test1.tranhist values (7,'2002/03/19',2);
insert into test1.tranhist values (14,'2004/01/04',2);
insert into test1.tranhist values (8,'2002/03/04',2);
insert into test1.tranhist values (8,'2002/03/12',2);
insert into test1.tranhist values (6,'2002/03/04',2);
insert into test1.tranhist values (6,'2002/03/01',2);
insert into test1.tranhist values (3,'2002/04/04',2);
insert into test1.tranhist values (18,'2002/05/04',2);
insert into test1.tranhist values (18,'2002/05/19',2);
insert into test1.tranhist values (3,'2002/11/02',2);
insert into test1.tranhist values (5,'2003/04/04',2);
insert into test1.tranhist values (4,'2003/04/04',2);
insert into test1.tranhist values (5,'2003/08/02',2);
insert into test1.tranhist values (4,'2003/10/03',2);
insert into test1.tranhist values (4,'2003/04/04',2);
insert into test1.tranhist values (5,'2003/08/02',2);
insert into test1.tranhist values (4,'2003/10/03',2);
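
As a quick sanity check (not part of the original walkthrough), the built-in PARTITION column shows how the inserted rows spread across partitions:

select partition, count(*) from test1.tranhist group by 1 order by 1;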


Take a full Backup of database test1.


.logon localhost/sysdba,sysdba;
archive data table
(test1.tranhist),
release lock,
file=arcpart;

Copy to other database (test2).

.logon localhost/sysdba,sysdba;
copy data table
(test2.tranhist) (from (test1.tranhist)),
release lock,
file=arctran;


Take a partition backup of test1.


.logon localhost/sysdba,sysdba;
archive data table
(test1.tranhist) (partitions where (!transactiondate between date '2001-01-01' and date '2001-05-01'!)),
release lock,
file=arcpart;

Check data in partition of test2.


select * from test2.tranhist where transactiondate between date '2001-01-01' and date '2001-05-01';

*** Query completed. 11 rows found. 3 columns returned.
 *** Total elapsed time was 1 second.

TransactionID  TransactionDate  TransactionParam1
-------------  ---------------  -----------------
            2       2001-03-04                  2
           14       2001-01-17                  2
            2       2001-03-01                  2
           11       2001-02-04                  2
           11       2001-02-24                  2
           10       2001-02-04                  2
           10       2001-02-14                  2
            9       2001-03-04                  2
            9       2001-03-14                  2
           12       2001-03-04                  2
           12       2001-03-24                  2


Deleted the partition records from test2.


delete from test2.tranhist where transactiondate between date '2001-01-01' and date '2001-05-01';

*** Delete completed. 11 rows removed.
 *** Total elapsed time was 1 second.

Copy Partition data from backup to test2.


copy data table
(test2.tranhist)(from (test1.tranhist), partitions where (!transactiondate between date '2001-01-01' and date '2001-05-01'!)),
release lock,
file=arcpart;


Check data in partition of test2.


*** Query completed. 11 rows found. 3 columns returned.
 *** Total elapsed time was 1 second.

TransactionID  TransactionDate  TransactionParam1
-------------  ---------------  -----------------
            2       2001-03-04                  2
           14       2001-01-17                  2
            2       2001-03-01                  2
           11       2001-02-04                  2
           11       2001-02-24                  2
           10       2001-02-04                  2
           10       2001-02-14                  2
            9       2001-03-04                  2
            9       2001-03-14                  2
           12       2001-03-04                  2
           12       2001-03-24                  2


 
Potential Data Risks When Archiving/Restoring Selected Partitions:

Be careful when archiving partitioned tables: a number of undesirable conditions can occur, and additional issues might arise during restore operations.

Caution: The following cases generally do not display an error or give any indication that a problem has occurred. In most instances, the only indication is that data is incorrect or missing from a table.

• Use correct specifications: The incorrect use of specifications causes the following problems. An incorrect PARTITIONS WHERE specification during backup can result in an incomplete archive or difficulties during a restore operation. An incorrect PARTITIONS WHERE or ALL PARTITIONS specification during restore can result in data lost from a table, or in the restoration of stale data to a table if the archive being restored contains partial, incomplete, or stale versions of an already existing partition.

• Restrict updates to active partitions: It is not possible to determine which partitions have been modified since the last backup. If changed partitions are not re-archived, the changes are lost when restored. For example, suppose the backup strategy is to back up only the active (latest) partition of the table, and a change is made to a non-active partition (to fix an incorrect update); the only way to archive the change is to archive the changed partitions separately. The remedy for this situation is to restrict updates to the active partitions only (by using views to control which rows/partitions are updated) or to re-archive all modified partitions.

• Do not change the values of functions or variables: If a built-in SQL function or variable is used in the PARTITIONS WHERE condition, and the value of the function or variable changes during the job, a different set of partitions might be archived (or restored) for some objects in that single archive. For example, if an archive job uses the CURRENT_DATE built-in function to determine which is the active partition, and the backup runs past midnight, the date change causes a different partition to be selected. This means that objects archived after midnight will archive the new (and probably empty) partition. The remedy for this situation is to do one of the following: avoid using a changing function or variable in the PARTITIONS WHERE condition; run the backup at a time when the value will not change; or modify the PARTITIONS WHERE condition to take the value change into account when selecting partitions, for example by defining a range such as BETWEEN CURRENT_DATE - n AND CURRENT_DATE to archive the active partition even if the date changes.

• Always specify PARTITIONS WHERE or ALL PARTITIONS: If PARTITIONS WHERE or ALL PARTITIONS is not specified for a RESTORE or COPY operation, the default action is to overwrite the entire table with the archived table definition and data. Essentially, this is the same as a full-table restore. For example, if PARTITIONS WHERE is omitted when restoring a single-partition backup, data is dropped from the table and the single partition stored on the archive is restored. To solve this problem, always specify PARTITIONS WHERE or ALL PARTITIONS when restoring partitions into an existing table; otherwise, the existing table will be overwritten.

• Know what partitions are being deleted: In a RESTORE or COPY operation, all partitions that match the PARTITIONS WHERE condition are deleted, even if they are not stored on the archive. For example, if an archive is restored that contains the data for April 2007 but has a PARTITIONS WHERE condition that matches both March and April 2007, the data for March and April 2007 is deleted and only April 2007 is restored. Therefore, be careful when using PARTITIONS WHERE. If there is any doubt about which partitions are affected, COPY the selected partition backup to a staging table, and manually copy the desired partition(s) into the target table using INSERT ... SELECT and/or DELETE.

• Avoid restoring from a previous partitioning scheme: When changing the partitioning expression for a table, changing the boundaries of existing partitions is feasible. If these partitions are restored, Teradata might drop more data than expected, or restore less data than expected, if the archive does not include data for all of the selected partitions. For example, if an archive is done on a table partitioned by month, with the archive data corresponding to March 2004, and the table is re-partitioned by week, then a PPI restore of the March backup (using ALL PARTITIONS) overwrites the data for all weeks that contain at least one day in March. As a result, the last few days of February and the first few days of April might be deleted and not restored. Therefore, avoid restoring partition backups from a previous partitioning scheme to an updated table. Or, use LOG WHERE for the weeks that contain days in both March and February/April, and manually copy the rows into the table.

• Track the partitions in each archive: Manual steps are required to determine which partitions are archived by a given backup job, or to determine which backup job has the latest version of a given partition. ANALYZE displays the Teradata-generated bounding condition that defines the archived partitions (this differs from a user-entered condition, which might only qualify partial partitions). Inconsistent or old data might be restored to the table if the wrong archive is restored for a partition, or if partition-level archives are restored out of order and the archives contain an overlapping set of partitions. For example, updated data is lost in the following situation. Assume that a final backup for a March 2007 partition is performed on April 1, 2007. On April 5, a mistake is found in a row dated March 16, so the row is updated, and a new backup of the March partition is done. If the table is accidentally deleted a month later, and an attempt is made to restore the April 1 backup instead of the April 5 backup, the updated data is lost. To determine the partitions in each archive, keep track of the partition contents of each archived table, retain the output listing associated with a tape, or run ANALYZE jobs on archives.