Thursday, 24 September 2020
Cluster Service with and without Domain Parameter
Wednesday, 23 September 2020
Find default ExaCS Storage Server Username and Password
On Exadata Cloud@Customer, the preconfigured user for Exadata Storage Server is cloud_user_clustername, where clustername is the name of the virtual machine (VM) cluster that is being used.
Check Clustername:
[grid@exanode1 ~]$ crsctl get cluster name
CRS-6724: Current cluster name is 'CLUSTER_NAME'
The password for cloud_user_clustername is initially set to a random value.
Check default password:
[root@exanode1 ~]# /opt/exacloud/get_cs_data.py --data_file /opt/exacloud/cs_data.enc
********************** PASSWORD
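With the cluster name and this password, the storage servers can be accessed with the ExaCLI utility from a VM cluster node. A minimal sketch, assuming exacli is available in the default path and <cell-hostname> is a placeholder for one of the storage server hosts (the exact invocation can vary by image version):
[grid@exanode1 ~]$ exacli -l cloud_user_CLUSTER_NAME -c <cell-hostname>
Once connected, CellCLI-style commands such as LIST CELL DETAIL can be run against the cell.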
Tuesday, 22 September 2020
Bitmap and B-tree indexes
In this article, we'll explain bitmap and b-tree indexes along with an example and corresponding commands for creating each type of index.
B-Tree Index:
- B-tree indexes are widely used in database systems, including Oracle. The "B" in B-tree stands for balanced, indicating that these indexes maintain a balanced tree structure.
- B-tree indexes are most suitable for columns with high cardinality, meaning they have many distinct values. Examples of such columns include primary keys, unique identifiers, and highly selective columns.
- The structure of a B-tree index consists of multiple levels, starting with a root node and extending to leaf nodes. Leaf nodes contain index key values and pointers to the corresponding rows in the table.
- B-tree indexes excel in supporting range searches and equality conditions. They efficiently find a specific value or a range of values within the index, making them beneficial for queries involving comparisons such as greater than, less than, or between.
- Updates and inserts on B-tree indexes can be efficient due to the balanced structure, as the index tree does not require significant restructuring during modifications.
- B-tree indexes work well for queries that retrieve a small percentage of rows from a table, as they provide efficient navigation to the desired data.
Example: Let's assume we have a table called employees with columns employee_id, first_name, and last_name, and we want to create a B-tree index on the last_name column.
Command to create a B-tree index:
CREATE INDEX idx_employees_last_name ON employees(last_name);
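For illustration, queries like the following (with hypothetical data values) can use this index for equality and range searches on last_name:
SELECT employee_id, first_name, last_name FROM employees WHERE last_name = 'Smith';
SELECT employee_id, last_name FROM employees WHERE last_name BETWEEN 'A' AND 'D';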
Bitmap Index:
- Bitmap indexes are suitable for columns with low cardinality, meaning they have few distinct values. Examples include boolean fields or columns representing categories or flags.
- Bitmap indexes create a bitmap for each distinct value in the indexed column. A bitmap is a sequence of bits, where each bit represents a row in the table. The bits are set to 1 if the corresponding row contains the indexed value and 0 otherwise.
- Bitmap indexes are efficient for queries involving multiple conditions combined with logical AND or OR operations. They can quickly determine which rows satisfy complex combinations of conditions by performing bitmap operations like AND, OR, and NOT on the bitmaps.
- Bitmap indexes require less storage space compared to B-tree indexes for columns with low cardinality since they represent the presence or absence of a value rather than storing individual pointers.
- Updates and inserts on bitmap indexes can be relatively expensive, especially if the index covers many rows. Modifying a single row may require updating multiple bitmaps.
Example: Let's consider a table called orders with columns order_id, product_name, and order_date, and we want to create a bitmap index on the product_name column.
Command to create a bitmap index:
CREATE BITMAP INDEX idx_orders_product_name ON orders(product_name);
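For illustration, bitmap indexes pay off when several low-cardinality conditions are combined. Assuming a second bitmap index on a hypothetical order_status column, the optimizer can merge the two bitmaps for a query such as:
CREATE BITMAP INDEX idx_orders_order_status ON orders(order_status);
SELECT COUNT(*) FROM orders WHERE product_name = 'Laptop' AND order_status = 'SHIPPED';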
B-tree indexes are well-suited for high cardinality columns and queries involving range searches and equality conditions. Bitmap indexes are effective for low cardinality columns and queries with complex combinations of conditions. It's essential to analyze your data and query patterns to determine the most appropriate index type for your specific use case.
Monday, 21 September 2020
Partition pruning
Partition pruning is a performance optimization technique used in databases that store data in partitioned tables. Partitioning is a technique used to divide a large table into smaller, more manageable parts called partitions, based on a partition key. Each partition is a separate physical segment of the table, and data is stored in each partition based on the partition key.
Partition pruning is the process of eliminating partitions that do not need to be scanned based on the conditions specified in a SQL query. When a query is executed on a partitioned table, the optimizer checks the WHERE clause of the query to see if it includes a predicate that matches the partition key. If a partition key is used in the WHERE clause, the optimizer can prune partitions that do not need to be scanned, based on the partition key values in the query.
For example, consider a partitioned table that is partitioned by date, with each partition representing a month of data. If a query is executed with a date range in the WHERE clause, the optimizer can prune partitions that fall outside of the date range, and only scan the partitions that contain data within the specified date range. This can significantly reduce the amount of data that needs to be scanned and improve query performance.
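A minimal sketch of that scenario, using hypothetical table, column, and partition names:
CREATE TABLE sales (
   sale_id   NUMBER,
   sale_date DATE,
   amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
   PARTITION p_2020_08 VALUES LESS THAN (DATE '2020-09-01'),
   PARTITION p_2020_09 VALUES LESS THAN (DATE '2020-10-01'),
   PARTITION p_2020_10 VALUES LESS THAN (DATE '2020-11-01')
);
-- Only partition p_2020_09 is scanned; the other partitions are pruned.
SELECT SUM(amount) FROM sales WHERE sale_date >= DATE '2020-09-01' AND sale_date < DATE '2020-10-01';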
Partition pruning is a powerful technique that can significantly improve query performance on partitioned tables. It is important to design partitioning schemes and queries with partition pruning in mind, to ensure that the optimizer can take advantage of this optimization technique.
Wednesday, 16 September 2020
Nested, hash, merge, and cartesian joins
Are you trying to understand the differences between nested, hash, merge, and cartesian joins? Joins are an essential part of working with data, and understanding the nuances of each type of join can make a big difference in the accuracy of your queries and the performance of your database. In this blog, we'll explore the differences between nested, hash, merge, and cartesian joins, and discuss the advantages and drawbacks of each.
Nested Join: A nested (nested loops) join is a join operation in which the database takes each row returned from the outer (driving) table and, for every such row, looks up the matching rows in the inner table. The processing pattern resembles a correlated subquery.
Nested joins can be computationally expensive because the inner table is probed once for each row returned by the outer query. As a result, nested joins can become very slow for large tables or when the outer row source returns a large number of rows.
Hash Join: A hash join is a type of join operation in which the database system creates a hash table of one table and then uses that hash table to join it with another table. This type of join is efficient for large tables with no indexes.
Hash joins can be very fast for large tables with no indexes. However, if the tables are too small or if there are many indexes on the tables, the overhead of creating and manipulating hash tables can make hash joins less efficient than other join algorithms.
Merge Join: A merge join is a type of join operation in which the database system sorts two tables based on a common column and then combines them using a merge algorithm. This type of join is efficient for large tables with indexes.
Merge joins can be very efficient for large tables with indexes, as they can take advantage of the pre-sorted order of the tables to avoid expensive sorting operations. However, if the tables are not sorted or if there are no indexes on the join columns, merge joins can be slower than other join algorithms.
Cartesian Join: A cartesian join is a type of join operation in which the database system combines each row from one table with every row from another table, resulting in a large number of rows. This type of join is used when no join condition is specified.
Cartesian joins can be very slow for large tables, as they can generate a very large number of intermediate rows. It is important to ensure that the cartesian join is necessary and that it is not generating unnecessary rows before executing the query.
In summary, the performance of a join operation depends on several factors, including the size of the tables, the selectivity of the join condition, the available indexes, and the characteristics of the join algorithm. It is important to carefully consider these factors when choosing a join algorithm to ensure optimal performance.
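To see which join method the optimizer actually chose for a statement, the execution plan can be checked; a minimal sketch with hypothetical tables t1 and t2:
SQL> EXPLAIN PLAN FOR SELECT * FROM t1 JOIN t2 ON t1.id = t2.t1_id;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
The plan output shows NESTED LOOPS, HASH JOIN, MERGE JOIN, or MERGE JOIN CARTESIAN for the corresponding join method.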
Thursday, 10 September 2020
Upload large file to Oracle SR
Upload huge log files to an Oracle SR using curl directly from the database server:
curl -T "filename" -u "supportloginid" https://transport.oracle.com/upload/issue/SR-Number/
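For example, with a hypothetical file name, support login, and SR number (curl prompts for the support account password):
curl -T "/backup/logs/node01_diag.zip" -u "dba@example.com" https://transport.oracle.com/upload/issue/3-12345678901/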
Tuesday, 8 September 2020
Failed to start ASM on node4 with error CRS-2552: There are no available instances of resource 'ora.asm' to start.
19.7 Grid upgrade: rootupgrade.sh fails on first node
rootupgrade.sh failed with the error below while upgrading Grid Infrastructure from 18.3 to 19.7.0.0.
Error:
CRS-2676: Start of 'ora.cssdmonitor' on 'node01' succeeded
CRS-1609: This node is unable to communicate with other nodes in the cluster and is going down to preserve cluster integrity;
details at (:CSSNM00086:) in /app/grid/diag/crs/node01/crs/trace/ocssd.trc.
CRS-2883: Resource 'ora.cssd' failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-41053: checking Oracle Grid Infrastructure for file permission issues
CRS-4000: Command Start failed, or completed with errors. 2020/09/07 09:08:46
CLSRSC-117: (Bad argc for has:clsrsc-117) Died at /u01/app/19.3.0.0/grid/crs/install/crsupgrade.pm line 1617.
The "unable to communicate with other nodes" errors in the alert and trace files can lead us down the wrong path, so we started by checking communication between the nodes.
1. Verified SSH connectivity between the nodes; it is working fine.
2. Verified ping and traceroute; they look good.
From Node 1:
+ ping -s 9000 -c 4 -I <node1-private address> <node1-private address>
+ ping -s 9000 -c 4 -I <node1-private address> <node2-private address>
+ traceroute -s <node1-private address> -r -F <node1-private address> 8972
+ traceroute -s <node1-private address> -r -F <node2-private address> 8972
From Node 2:
+ ping -s 9000 -c 4 -I <node2-private address> <node1-private address>
+ ping -s 9000 -c 4 -I <node2-private address> <node2-private address>
+ traceroute -s <node2-private address> -r -F <node1-private address> 8972
+ traceroute -s <node2-private address> -r -F <node2-private address> 8972
While checking gipcd.trc, we found some TLS handshake failures:
2020-09-07 08:21:27.483 : GIPCTLS:474797824: gipcmodTlsAuthInit: tls context initialized successfully
2020-09-07 08:21:27.524 :GIPCXCPT:474797824: gipcmodTlsLogErr: [NZOS], ssl_Handshake failed to perform operation on handshake with NZERROR [29024]
2020-09-07 08:21:27.524 :GIPCXCPT:474797824: gipcmodTlsAuthStart: ssl_Handshake() failed with nzosErr : 29024, ret gipcretTlsErr (49)
As per bug 2667217.1, a similar error was reported in the 19.6 upgrade.
Workaround on 19.7:
1) Run rootupgrade.sh on node1
2) When it fails on node1 with this error, shut down CRS on node2:
cd <18c_Gridhome/bin>
./crsctl stop crs
3) Rerun rootupgrade.sh on node1
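Before rerunning, the stack state on node2 can be confirmed from the same 18c Grid home (it should report that the cluster stack is down):
cd <18c_Gridhome/bin>
./crsctl check crs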
Sunday, 6 September 2020
Check CPU Clock Speed on Linux
We can find CPU processor details using the commands below.
1. lscpu:
It displays information about the CPU architecture.
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
Stepping: 4
CPU MHz: 3374.652
CPU max MHz: 3700.0000
CPU min MHz: 1200.0000
BogoMIPS: 5400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 33792K
NUMA node0 CPU(s): 0-95
2. dmidecode: print cpu speed
# dmidecode -t processor | grep "Speed"
Max Speed: 4000 MHz
Current Speed: 2700 MHz
Max Speed: 4000 MHz
Current Speed: 2700 MHz
3. Per-CPU processor details from the /proc/cpuinfo file
cat /proc/cpuinfo
cat /proc/cpuinfo | grep MHz
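For a quick look at only the clock-speed figures, the MHz lines can also be filtered out of the lscpu output (the values below are from the sample output above):
# lscpu | grep "MHz"
CPU MHz: 3374.652
CPU max MHz: 3700.0000
CPU min MHz: 1200.0000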
Saturday, 5 September 2020
Unable to delete archives in Standby db with rman
The "RMAN-08137: warning: archived log not deleted" error occurred while deleting archived logs after backup using the command below:
backup archivelog all format '/backup/$ORACLE_SID/arch/arch_flashblade_%d_%t_%s_%p' DELETE INPUT FILESPERSET 1;
Error:
piece handle=/backup/proddb01/arch/arch_flashblade_proddb0_1089597818_9469_1 tag=TAG20211126T020225 comment=NONE
channel c2: backup set complete, elapsed time: 00:00:51
channel c2: deleting archived log(s)
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/san/proddb0_fra/1_7456_1085402700.arc thread=1 sequence=7456
Cause:
log_archive_dest_2 on the standby, which points to the primary database, was in the DEFER state.
SQL> show parameter dest_2
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_2 string
log_archive_dest_2 string SERVICE=proddb01 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=proddb01
SQL> show parameter log_archive_dest_state_2
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_2 string DEFER
log_archive_dest_state_20 string enable
log_archive_dest_state_21 string enable
log_archive_dest_state_22 string enable
log_archive_dest_state_23 string enable
log_archive_dest_state_24 string enable
log_archive_dest_state_25 string enable
log_archive_dest_state_26 string enable
log_archive_dest_state_27 string enable
log_archive_dest_state_28 string enable
log_archive_dest_state_29 string enable
Solution:
Set log_archive_dest_2 to a null value and enable log_archive_dest_state_2:
SQL> alter system set log_archive_dest_2='' scope=both sid='*';
System altered.
SQL> alter system set log_archive_dest_state_2=enable scope=both sid='*';
System altered.
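With the destination state enabled again, the archived logs that RMAN previously skipped can be removed; a sketch using a hypothetical one-day retention window:
RMAN> crosscheck archivelog all;
RMAN> delete noprompt archivelog all completed before 'sysdate-1';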
Wednesday, 2 September 2020
How to Install and Upgrade Terraform
1. Install Homebrew (if not already installed)
$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
2. Install Terraform
$ brew install terraform
3. Check the Terraform version
$ terraform -v
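4. Upgrade Terraform
Assuming Terraform was installed through Homebrew as above, a later upgrade is:
$ brew update
$ brew upgrade terraform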