ORA-01400: cannot insert NULL into ("SYS"."tablename"."columnname")

I created a table with a PRIMARY KEY constraint on a column. I tried inserting a row into the table leaving the primary key column null, and got this error.
SQL> create table employee(
2 employee_id int primary key,
3 manager_id int,
4 department_id int,
5 first_name varchar2(20),
6 last_name varchar2(20),
7 email varchar2(30),
8 phone_number number(20),
9 hire_date date,
10 job_id int,
11 salary number,
12 commission_pct number);
Table created.
SQL> select * from employee;
no rows selected
SQL> insert into employee(first_name) values ('learnersreference');
insert into employee(first_name) values ('learnersreference')
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("SYS"."EMPLOYEE"."EMPLOYEE_ID")
This issue can be fixed in two ways:
1) Insert a value into the Primary Key Column
SQL> insert into employee(employee_id,first_name) values (1,'learnersreference');
1 row created.
2) Drop the Primary Key Constraint
SQL> alter table employee drop primary key;
Table altered.
SQL> insert into employee(first_name) values ('learnersreference');
1 row created.

AWS cloud support engineer interview questions

Amazon Web Services (AWS) is the Amazon company whose cloud platform has moved infrastructure from the physical data center to the cloud. It hires engineers in many capacities, including cloud support associate, cloud support engineer, senior cloud support engineer, cloud architect and support manager. For a fresh graduate out of college this is a lucrative career option to eye. Here we have put together some interview questions that will help you prepare for an AWS interview, including AWS cloud support engineer interview questions. The questions overlap across the AWS cloud support associate, cloud support engineer and cloud architect roles, as all these positions demand good knowledge, skill and expertise in Linux/UNIX operating systems and networking basics to start with.
Note that these are not actual interview questions. This is a study aid prepared from an analysis of the AWS technology stack, current job openings, and the job role responsibilities advertised on popular job sites for AWS cloud support engineer, cloud support associate and cloud support manager positions.
1) Why should we consider AWS? How would you convince a customer to start using AWS?
The primary advantage is cost savings. As a cloud support engineer your job involves talking to current and prospective customers to help them determine whether they really should move to AWS from their current infrastructure. In addition to a convincing answer on cost savings, it helps to give a simple explanation of flexibility, elastic capacity planning with pay-as-you-use infrastructure, the easy-to-manage AWS console and so on.
2) What is your current job profile? How would you add value to the customer?
Though AWS is looking to hire fresh talent for cloud support engineer openings, if you have some work experience on the infrastructure side of the business, say as a system administrator, network administrator, database administrator, firewall administrator, security administrator or storage administrator, you are still a candidate worth considering for the interview.
All they are looking for is overall infrastructure knowledge, a little knowledge of the different technology stacks, how they inter-operate, and what it will be like once the infrastructure is on the web rather than in a physical data center.
If you don't have experience with AWS, don't worry. Leverage the ways you solved customer support calls, both internal and external, to show how you can bring value to the table.
Have an overview of how the different components of an infrastructure interact.
AWS wants to see a proactive approach to customer relationships. If you are going to discuss a project or an issue with a customer, it is better to have some preparatory work at hand rather than being reactive. Value addition comes from recommending the best solution and the AWS services that will help them make decisions easily and fast.
3) Do you know networking?
Candidates can be from many different backgrounds: development, infrastructure, QA, customer support, network administration, system administration, firewall administration and so on. Whatever the background, you should know networking. The cloud is network based, and networking knowledge is very important for fixing the application issues that get escalated.
4) What networking commands do you make use of on daily basis to fix issues?
When we work with servers, be they physical or virtual, the first command that comes handy to trace the request/response path is traceroute. On Windows systems the equivalent command is tracert.
There are some more important commands – ping, ipconfig, ifconfig – that cover network connectivity, network addresses, interface configuration and so on.
DNS commands – nslookup, plus a look at the /etc/resolv.conf file on Linux systems to get details on the configured DNS servers.
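A minimal sketch of how these commands fit together when checking reachability of a host (the hostname below is just a placeholder):
ping -c 4 appserver.example.com – send four probes to confirm the host responds
traceroute appserver.example.com – show the network path taken, hop by hop (tracert on Windows)
nslookup appserver.example.com – confirm the name resolves through the configured DNS
cat /etc/resolv.conf – list the DNS servers the client is using (Linux)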
5) What is the advantage of using TCP protocol?
TCP is used to exchange data reliably. It uses sequencing and acknowledgements, error detection and error recovery. This gives the application reliability, but at the cost of additional transmission time.
6) What is UDP?
User Datagram Protocol (UDP) is a connectionless protocol that can be used for fast, efficient applications that need less transmission time than TCP.
7) Do you know how an internet works in your environment?
This can be your home or office network. Learn more about the modem/router and its role in establishing the connection.
8) What is a process? How do you manage processes in Linux:-
In a Linux/UNIX based OS a process is started or created when a command is issued. In simple terms, while a program is running an instance of the program is created; this instance is the process. To manage processes in Linux the process management commands come handy:
ps – the most commonly used process management command to start with. ps provides details on the currently running active processes
top – provides details on all running processes. ps lists active processes, whereas top shows the activity of the processor in real time, including details on processor and memory usage
kill – to kill a process using the process id the kill command is used. ps provides details on the process id. To kill a process issue
kill pid
killall proc – similar to kill; kills all the processes whose name matches proc
9) Give details on foreground and background jobs command in Linux:-
fg – brings the most recent job to the foreground. Typing fg resumes the most recently suspended job
fg n – brings job n to the foreground, for example fg 1 brings back a job that was just backgrounded
bg – resumes a suspended job in the background without bringing it to the foreground. The jobs command lists stopped jobs as well as current background jobs
10) How to get details on current date and time in Linux?
Make use of the date command, which shows the current date and time. To get the current month's calendar use the cal command.
uptime – shows how long the system has been up
11) What is the difference between the df and du commands?
In Linux both df and du are space related commands showing file system space information
df – provides details on disk usage of mounted file systems
du – provides details on directory space usage
free – shows details on memory and swap usage
12) What are the different commands and options to compress files in Linux?
Let's start by creating a tar named test.tar containing the needed files
tar cf test.tar files
Once the tar is available and uploaded to AWS, there is a need to untar the files. Use the command as follows:
tar xf test.tar
We can create a tar with gzip compression, which minimizes the size of the files to be transferred and creates test.tar.gz at the end
tar czf test.tar.gz files
To extract the gzipped tar compressed files use the command:
tar xzf test.tar.gz
Bzip2 compression can be used to create a tar as follows
tar cjf test.tar.bz2 files
To extract bzip2 compressed files use
tar xjf test.tar.bz2
To simply use gzip compression on a single file use
gzip testfile – this creates testfile.gz
To decompress testfile.gz use gzip -d testfile.gz
13) Give examples of some common networking commands you have made use of?
Note that the AWS stack is primarily Linux based, and the over-the-cloud architecture makes it heavily network dependent. As a result the AWS interview can lean towards networking irrespective of your system admin, database admin or bigdata admin background. Learn these simple networking commands:
When a system is unreachable the first step is to ping the host and make sure it is up and running
ping host – pings the host and outputs the results
Domain related commands matter, as AWS has become the preferred hosting for major internet based companies and SaaS firms
To get DNS information of the domain use – dig domain
To get whois information on domain use – whois domain
Host reverse lookup – dig -x host
Download file – wget file
To continue stopped download – wget -c file
14) What is your understanding of SSH?
SSH, the secure shell, is widely used for safe communication. It is a cryptographic network protocol for operating network services securely over an unsecured network. Some of the commonly used ssh commands include:
To connect to a host as a specified user using ssh use this command:
ssh username@hostname
To connect to a host on a specified port make use of this command
ssh -p portnumber username@hostname
To enable a keyed or passwordless login into specified host using ssh use
ssh-copy-id username@hostname
15) How do you perform a search in a Linux environment?
Searching and pattern matching are common tasks in a Linux environment. Here are the relevant Linux commands:
grep – the first and foremost command when it comes to searching files for a pattern. Here is the usage:
grep pattern_match test_file – this will search for pattern_match in test_file
Search for a pattern in a directory containing a set of files using the recursive option – grep -r pattern dir – searches for pattern in the directory recursively
A pattern can also be searched in the output of another command (i.e.) the output of a command is used as the input for the pattern search – firstcommand | grep pattern
To find all instances of a file use the locate command – locate file
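For example, combining the two ideas above, the output of ps can be piped into grep (the process name here is only an example):
ps -ef | grep sshd – keep only the process listing lines that mention sshd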
16) Give details on some user related commands in Linux:-
Here are some user related Linux commands:
w – displays details on who is online
whoami – shows who you are logged in as
finger user – displays information about the user
17) How to get details on kernel information in Linux?
uname -a command provides details on kernel information
18) How to get CPU and memory info on a Linux machine?
Issue the following commands:
cat /proc/cpuinfo for cpu information
cat /proc/meminfo for memory information
19) What are the file system hierarchy related commands in Linux?
The file system hierarchy, starting with raw disks, the way disks are formatted into file systems, and files grouped together as directories, is important for cracking an AWS interview. Here are some file system hierarchy related commands that come handy
touch filename – creates a file with the name filename. This command can also be used to update a file's timestamp
ls – lists the files and directories
ls -al – All files including hidden files are listed with proper formatting
cd dir – change to specified directory
cd – Changes to home directory
pwd – called present working directory that shows details on current directory
Make a new directory using mkdir command as follows – mkdir directory_name
Remove a file using the rm command – rm file – removes file
To delete a directory use the -r option – rm -r directory_name
Remove a file forcefully using the -f option – rm -f filename
To force remove a directory and its contents use – rm -rf directory_name
Copy the contents from one file to another – cp file1 file2
Copy a directory recursively – cp -r dir1 new_dir – copies dir1 to new_dir; if new_dir does not exist it is created
Move or rename a file using the mv command – mv file1 new_File
If new_File is a directory that already exists, file1 will be moved into that directory
more filename – output the contents of the file
head file – output the first 10 lines of the file
tail file – output the last 10 lines of the file
tail -f filename – output the contents of the file as it grows, to start with display last 10 lines
Create symbolic link to a file using ln command – ln -s file link – called soft link
20) What command is used for displaying the manual of a command?
Make use of the command man command_name
21) Give details on app related commands in Linux:-
which app – shows which executable will be run by default when app is invoked
whereis app – shows the possible locations of the application
22) What are the default port numbers of http and https?
Questions on the http and https port numbers come up because checking them is a first step in launching a webapp when a customer reports an issue
The default port number of http is 80 (8080 is a common alternative)
Default port number of https is 443
23) What is the use of a load balancer?
A load balancer is used to increase the capacity and reliability of applications. Capacity here means the number of users connecting to the applications. The load balancer distributes network and application traffic across many different servers, increasing application capacity
24) What is the sysprep tool?
The System Preparation tool (sysprep) comes as a free tool with Windows and can be accessed from the %systemroot%\system32\sysprep folder. It is used to duplicate, test and deliver new installations of Windows based on an established installation
25) A user is not able to RDP into a server. What could be the reason?
A probable reason is that the user is not part of the Remote Desktop Users local group on the terminal server
26) How would you approach a customer issue?
Most of the work of an AWS support engineer involves dealing with customer issues. As with any other support engineer, an AWS engineer should question the customer, listen to them, and confirm what has been collected. This is called the QLC approach, a much needed step to capture the issue description and confirm it
27) What types of questions can you ask a customer?
A support engineer can ask two types of questions
1) Open ended questions – the question is a single statement, and the answer you expect from the customer is detailed
2) Closed questions – the question has yes (or) no, true (or) false type answers, or a single word answer in some cases
28) How do you consider the customer from an AWS technology perspective?
Even though the customer can be a long standing customer of AWS, always think of the customer as someone with no knowledge of AWS; talk more to them and explain more details to them to get a correct issue description statement
29) Give details on operators in Linux?
> – the greater than symbol is the output redirection operator used to write the output of a command into a file. Typically this is used to redirect the output of a command into a logfile. If the file already exists the contents are overwritten and only the most recent content is retained
>> – this is the same as the above redirection except that it appends to the file if the file already exists
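A small illustration of the two operators (the file name here is just a placeholder):
ls -l /var/log > files.txt – creates or overwrites files.txt with the directory listing
date >> files.txt – appends the current date to the end of files.txt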
30) Explain the difference between hardlink and softlink in simple terms?
A hard link is a link to the inode that holds the file contents; a soft link is a link to the file name. If the original file name changes, the soft link is not updated and breaks, while the hard link still works. For both hard and soft links the ln command is used: plain ln creates a hard link, and the ln -s option creates a soft link
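For example (file names are placeholders):
ln notes.txt notes.hard – hard link: a second name for the same inode as notes.txt
ln -s notes.txt notes.soft – soft (symbolic) link: a pointer to the name notes.txt that breaks if notes.txt is renamed or removed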
31) What are some common Linux commands an AWS engineer should be aware of?
1) cat – This is plain simple command to access a file in UNIX
2) ls – Provides details on list of files and directories
3) ps – The process command provides details on list of processes in the system
4) vmstat – Virtual memory statistics comes handy during performance tuning
5) iostat – Command to determine I/O issues
6) top – This command provides details on top resource consuming processes
7) sar – This is a UNIX utility mainly used for tuning purpose
8) rm – This command is used to remove files
9) mv – moving the files and directories
cd – Enables us to change directories
date – gives us the time and date
echo – we can display text on our screen
grep – a pattern recognition command. It enables us to see if a certain word or set of words occurs in a file or in the output of any other command
history – gives us the commands entered previously by us or by other users
passwd – this command enables us to change our password
pwd – to find out our present working directory or to simply confirm our current location in the file system
uname – gives all details of the system when used with options. We get details including the system name, kernel version etc.
whereis – gives us the exact location of the executable file for the utility in question
which – enables us to find out which version (of possibly multiple versions) of the command the shell is using
who – this command provides us with a list of all the users currently logged into the system
whoami – this command indicates the effective user you are logged in as. If a user logs in as userA and does an su to userB, whoami displays userB, while who am i still displays userA.
man – this command will display a great detail of information about the command in the question
find – this command gives us the location of the file in a given path
more – this command shows the contents of a file, one screen at a time
ps – this command gives the list of all processes currently running on our system
cat – this command lets us to read a file
vi – a text editor that enables us to read a file and write to it
emacs- this is a text editor that enables us to read a file and write to it
gedit – this editor enables us to read a file and write to it
diff – this command compares two files, returns the lines that are different, and tells us how to make the files the same
export – we can make a variable's value available to child processes by exporting the variable. This command is valid in bash and ksh.
setenv – this is the same as the export command and is used in csh and tcsh
env – to display the set of environment variables at the prompt
echo $variable_name – displays the current value of the variable
source – whenever an environment variable is changed, we need to export the changes. The source command is used to put the environment variable changes into immediate effect. It is used in csh and tcsh
.profile – in ksh and bash, use the command . .profile to get the same result as using source
set noclobber – to avoid accidental overwriting of an existing file when we redirect output to a file. It is a good idea to include this command in a shell startup file such as .cshrc
32) What are the considerations while creating usernames/user logins for security administration purposes?
It is a good practice to follow certain rules while creating usernames/user logins
1) User name/user login must be unique
2) User name/user login must contain a combination of 2 to 32 letters, numerals, underscores (_), hyphens (-) and periods (.)
3) There should not be any spaces/tab spaces while creating user names/user logins
4) User name must begin with a letter and must have at least one lowercase letter
5) User name should be between three and eight characters long
6) It is a best practice to have alphanumeric user names/user logins. It can be a combination of lower case letters, upper case letters, numerals, punctuations
33) Give details on /etc/profile, the system profile file, and its usage in a Linux environment:-
/etc/profile is another important UNIX system administration file and has much to do with user administration. This file is run when we first log into the system; it is the system profile file. After this the user profile file is run. The user profile is the file wherein we define the user's environment details. Following are the different forms of user profile files:
.profile
.bash_profile
.login
.cshrc
/home/username is the default home directory. The user's profile file resides in the user's home directory.
34) How to perform core file configuration in Linux environment?
Let's consider a UNIX flavor, say Solaris. Core file configuration involves the following steps:
1) As a root user, use the coreadm command to display the current coreadm configuration :
# coreadm
2) As a root user, issue the following command to change the core file setup :
# coreadm -i /cores/core_new.%n.%f
3) Run the coreadm command again to verify that the changes have been made permanent
# coreadm
The output line "init core file pattern:" will reflect the new changes made to the core file configuration.
From Solaris 10 onwards, the coreadm process is configured by the Service Management Facility (SMF) at system boot time. We can use the svcs command to check the status. The service name for the coreadm process is:
svc:/system/coreadm:default
35) How do you configure or help with customer printer configuration?
Administering printers involves the steps below.
Once the print server and print client are set up, we may need to perform the following administrative tasks frequently:
1) Check the status of printers
2) Restart the print scheduler
3) Delete remote printer access
4) Delete a printer
36) How is a zombie process recognized in Linux and its flavors? How do you handle zombie processes in a Linux environment?
A zombie process in UNIX/Linux/Sun Solaris/IBM AIX is recognized by the state Z. It doesn't use CPU resources, but it still uses space in the process table.
It is a dead process whose parent did not clean up after it and it is still occupying space in the process table.
They are defunct processes that are automatically removed when a system reboots.
Keeping OS and applications up to date and with latest patches prevents zombie processes.
Properly using the wait() call in the parent process prevents zombie processes.
SIGCHLD is the signal sent by the child to the parent upon task completion, after which the parent reaps the child (proper termination).
kill -s SIGCHLD parent_pid – sends SIGCHLD (signal 18 on Solaris, 17 on Linux) to the parent process so that it reaps its zombie children
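A quick way to spot zombies, as a sketch (ps output columns vary slightly between implementations):
ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/' – print pid, parent pid, state and command, keeping only rows whose state contains Z (zombie)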
37) What is the use of /etc/ftpd/ftpusers in Linux?
/etc/ftpd/ftpusers is used to restrict which users can use FTP (File Transfer Protocol). ftp is a security threat as the password is not encrypted while using ftp. ftp must not be used by sensitive user accounts such as root, snmp, uucp, bin, admin (default system user accounts).
As a security measure we have a file called /etc/ftpd/ftpusers created by default. The users listed in this file are not allowed to do ftp. The ftp server in.ftpd reads this file before allowing users to perform ftp. If we want to restrict a user from doing ftp we have to include their name in this file.
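For example, to deny ftp to one more account (the account name here is only illustrative), append it to the file as root:
echo "appuser1" >> /etc/ftpd/ftpusers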
38) Have you ever helped a customer restore a root file system in their environment?
Restoring the root file system (/) involves the steps below on SPARC and x86 (Intel) machines.
1) Log in as the root user. It is a security practice to log in as a normal user and perform an su to take on the root user (super user) role.
2) The appearance of the # prompt is an indication that the user is root
3) Use the who -a command to get information about the current user
4) When the root file system (/) is lost because of a disk failure, boot from CD or from the network.
5) Add a new system disk to the system on which we want to restore the root (/) file system
6) Create a file system using the command :
newfs /dev/rdsk/partitionname
7) Check the new file system with the fsck command :
fsck /dev/rdsk/partitionname
8) Mount the filesystem on a temporary mount point :
mount /dev/dsk/devicename /mnt
9) Change to the mount directory :
cd /mnt
10) Write protect the tape so that we can't accidentally overwrite it. This is an optional but important step
11) Restore the root file system (/) by loading the first volume of the appropriate dump level tape into the tape drive. The appropriate dump level is the lowest dump level of all the tapes that need to be restored. Use the following command :
ufsrestore -rf /dev/rmt/n
12) Remove the tape and repeat step 11 if there is more than one tape for the same level
13) Repeat steps 11 and 12 with the next dump levels. Always begin with the lowest dump level and end with the highest dump level tape
14) Verify that the file system has been restored :
ls
15) Delete the restoresymtable file which is created and used by the ufsrestore utility :
rm restoresymtable
16) Change to the root directory (/) and unmount the newly restored file system
cd /
umount /mnt
17) Check the newly restored file system for consistency :
fsck /dev/rdsk/devicename
18) Create the boot blocks to restore the root file system :
installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/devicename — SPARC system
installboot /usr/platform/`uname -i`/lib/fs/ufs/pboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/devicename — x86 system
19) Remove the last backup tape, and insert a new tape onto which we can write. Make a dump level 0 backup of the newly restored system by issuing the following command :
ufsdump 0ucf /dev/rmt/n /dev/rdsk/deviceName
This step is needed because ufsrestore repositions the files and changes the inode allocations – the old backup will not truly represent the newly restored file system
20) Reboot the system :
#reboot (or)
# init 6
System gets rebooted and newly restored file systems are ready to be used.
39) What is the monitoring and reporting tool that comes as part of the AWS console?
CloudWatch, the tool listed under the management section of the AWS console, helps with monitoring and reporting metrics in an AWS environment. The following metrics can be monitored as part of CloudWatch:
1) CPU
2) Disk utilization
3) Network
4) Status Check
In addition to the above mentioned metrics, RAM can be monitored as a custom metric using CloudWatch
40) Give details on status checks in CloudWatch?
In an AWS environment the status of both the instance and the system needs to be monitored. As such there are system status check and instance status check sections associated with each and every EC2 instance. As the name implies, the system status check makes sure that the physical machine on which the instance is hosted is in good shape. The instance status check is at the EC2 instance level, which literally translates to the virtual machine in an AWS environment
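As an illustration, both checks for a given instance can also be read from the AWS CLI (the instance id below is a placeholder); the response contains both SystemStatus and InstanceStatus blocks:
aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0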
41) What happens if a failure is reported in the status check section of AWS?
Depending on what type of failure has been reported, the following actions can be taken:
In case of a system failure – restart the virtual machine. In AWS terms, restart the EC2 instance. This will automatically bring up the virtual machine on physical hardware that is issue free
Instance failure – depending on the type of failure reported on the EC2 instance, this can be stopping and starting the virtual machine to fix the issue. In case of a disk failure, appropriate action can be taken at the operating system level to fix the issue
42) What is an EC2 instance in AWS?
This is the basic component of AWS infrastructure. EC2 stands for Elastic Compute Cloud. In real terms this is a pre-built virtual machine template hosted in AWS that can be chosen and customized to fit the application needs
This is the prime AWS service that eliminates a business's need to own a data center to maintain its servers, hosts etc
43) What is ephemeral storage?
Ephemeral storage is storage that is temporary (or) non-persistent
44) What is the difference between instance and system status checks in CloudWatch?
An instance status check checks the EC2 instance in an AWS environment, whereas a system status check checks the underlying host
45) What is the meaning of an EBS volume status check warning?
It means the EBS volume is degraded or severely degraded. Hence, a warning in an EBS environment is something that can't be ignored as it might be with other systems
46) What is the use of the ReplicaLag metric in AWS?
ReplicaLag is a metric used to monitor the lag between the primary RDS instance (RDS, the Relational Database Service, is the managed database equivalent in an AWS environment) and the read replica, the secondary database that is in read-only mode
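As a sketch, the metric can be pulled with the AWS CLI (the replica identifier and time window below are placeholders):
aws cloudwatch get-metric-statistics --namespace AWS/RDS --metric-name ReplicaLag --dimensions Name=DBInstanceIdentifier,Value=mydb-replica --statistics Average --period 300 --start-time 2020-01-01T00:00:00Z --end-time 2020-01-01T01:00:00Z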
47) What is the minimum granularity level that CloudWatch can monitor?
The minimum granularity that CloudWatch can monitor is 1 minute. In most real-time cases 5 minute metric monitoring is configured
48) What is the meaning of EBS volume impaired?
EBS volume impaired means that the volume is stalled or not available
49) Where is ELB latency reported?
The latency reported by the Elastic Load Balancer (ELB) is available in CloudWatch
50) What is included in the EC2 instance launch log?
Once the EC2 instance is created, configured and launched, the following details are recorded in the instance launch log:
Creating security groups – the result needs to be Successful. In case of issues the status will be different
Authorizing inbound rules – For proper authorization this should show Successful
Initiating launches – Again this has to be Successful
At the end we see a message that says Launch initiation complete
51) What will happen once an EC2 instance is launched?
After the EC2 instance has been launched it will be in the running state. Once an instance is in the running state it is ready for use. At this point usage hours, which are typically the billable resource usage, start accruing. This continues until we stop or terminate the instance. The next immediate step is to view the instance


ORA-02095: specified initialization parameter cannot be modified

An Oracle database is fundamentally built on top of memory structures called the Oracle instance and physical files often referred to as the Oracle database. The Oracle instance is created upon system start, and memory allocation happens according to the values mentioned in a file called the initialization parameter file. Initialization parameters are the key drivers in determining the size of the database and the instance allocation, to name a few. Not all initialization parameters are the same: some are static in nature and can't be changed while the instance is running, and some are dynamic. Parameters must be changed with the proper SCOPE specification in the ALTER SYSTEM statement. ORA-02095: specified initialization parameter cannot be modified is often associated with initialization parameters that can't be modified dynamically.
I tried creating multiple copies of the control file. I got this error at the point where I tried altering the control_files initialization parameter. The issue got fixed when I specified scope=spfile.
SQL> alter system set control_files="C:\APP\USERNAME\ORADATA\ORASID\CONTROL01.CTL","C:\APP\USERNAME\ORADATA\ORASID\CONTROL02.CTL","C:\APP\USERNAME\ORADATA\ORASID\CONTROL03.CTL";
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified
SQL> alter system set control_files="C:\APP\USERNAME\ORADATA\ORASID\CONTROL01.CTL","C:\APP\USERNAME\ORADATA\ORASID\CONTROL02.CTL","C:\APP\USERNAME\ORADATA\ORASID\CONTROL03.CTL" scope=spfile;
System altered.
Case #2:
I got this error when I tried changing an initialization parameter. I thought there was some problem with the initialization parameter itself, but it is a different problem. When we specified SCOPE=SPFILE the problem got fixed.
SQL> alter system set undo_management='manual';
alter system set undo_management='manual' *
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified
SQL> alter system set undo_management='manual' scope=spfile;
System altered.
Case #3:
Initialization parameters are the key drivers of the operation and performance of an Oracle database instance. Certain parameters can be modified dynamically, that is, changed while the instance is up and running (dynamic initialization parameters), while a few parameters like log_buffer are static initialization parameters that can't be changed without a restart.
SQL> alter system set log_buffer=32M scope=both;
alter system set log_buffer=32M scope=both
*
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified
The above command has a clause SCOPE. Scope has the same meaning as the English word scope, which in simple terms means range. This clause can take the values SPFILE, MEMORY and BOTH.
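As an illustration of the three values with a dynamic parameter (open_cursors is used here only as an example):
SQL> alter system set open_cursors=500 scope=memory;
changes only the running instance; the value is lost after a restart
SQL> alter system set open_cursors=500 scope=spfile;
records the value only in the spfile; it takes effect at the next startup
SQL> alter system set open_cursors=500 scope=both;
changes the running instance and also persists the value in the spfile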


RMAN Tablespace Backup Oracle Database

Before making a backup of a tablespace we can list the backups and make sure that a backup doesn't already exist. On confirming, we can make a backup of the tablespace using the BACKUP TABLESPACE command.
1) Connect to target database using rman executable
$ rman target /
2) List the backups of the tablespace. This is to make sure that no backup really exists
RMAN> list backup of tablespace sysaux;
using target database control file instead of recovery catalog
specification does not match any backup in the repository
3) Make a backup of the tablespace
RMAN> backup tablespace sysaux;
Starting backup at 25-MAY-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=C:APPusernameonlinetutelageonlinetutelageSYSAUX01.DBF
channel ORA_DISK_1: starting piece 1 at 25-MAY-10
channel ORA_DISK_1: finished piece 1 at 25-MAY-10
piece handle=C:APPusernameFLASH_RECOVERY_AREAonlinetutelageBACKUPSET2
010_05_25O1_MF_NNNDF_TAG20100525T081331_5ZQHQDLD_.BKP tag=TAG20100525T081331 co
mment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
Finished backup at 25-MAY-10
4) Make sure that the backup has been made properly by listing the backups
RMAN> list backup of tablespace sysaux;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
——- —- — ———- ———– ———— —————
1 Full 358.00M DISK 00:00:29 25-MAY-10
BP Key: 1 Status: AVAILABLE Compressed: NO Tag: TAG20100525T081331
Piece Name: C:APPusernameFLASH_RECOVERY_AREAonlinetutelageBACK
UPSET2010_05_25O1_MF_NNNDF_TAG20100525T081331_5ZQHQDLD_.BKP
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Name
—- — —- ———- ——— —-
2 Full 1058134 25-MAY-10 C:APPusernameonlinetutelageINFOPEDI
AONLINESYSAUX01.DBF


RMAN-06004: ORACLE error from recovery catalog database: RMAN-20018: database not found in recovery catalog

We've seen details on creating a recovery catalog in a database. The RMAN-06004 error happens when a recovery catalog is configured to be used with RMAN.

It is possible to create a virtual catalog, which is a special role granted to users to access a certain portion of the recovery catalog. It will usually be read and write access to specific database instances registered in the recovery catalog. Follow the steps below: create the user to whom you want to grant virtual private catalog access and grant that user the RECOVERY_CATALOG_OWNER privilege.

SQL> create user virtual_rman identified by virtual;
User created.
SQL> alter user virtual_rman temporary tablespace temp;
User altered.
SQL> grant recovery_catalog_owner to virtual_rman;
Grant succeeded.
Now connect to the RMAN recovery catalog as the rman user created previously. Grant the virtual private catalog role to the user virtual_rman.

RMAN> connect catalog rman/password
connected to recovery catalog database
RMAN> grant catalog for database databasename to virtual_rman;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20018: database not found in
recovery catalog
This is because the target database is not registered with the recovery catalog. Register it first, then grant the role. It will succeed.
RMAN> connect target /
connected to target database: dbname (DBID=34541)
RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
RMAN> report schema;
Report of database schema for database with db_unique_name value
RMAN>  grant catalog for database databasename to virtual_rman;
Grant succeeded.
RMAN> connect catalog virtual_rman/virtual_rman@dbname
Now the user virtual_rman connects to the base recovery catalog and creates the virtual private catalog
RMAN> connect catalog virtual_rman/virtual_rman@dbname;
connected to recovery catalog database
RMAN> create virtual catalog;
found eligible base catalog owned by RMAN
created virtual catalog against base catalog owned by RMAN


ora-00031 session marked for kill

Today I got a customer call to fix a scheduler job that had been running for a long time but doing nothing. I found the details of the scheduler job and attempted to kill it, during which I got the ora-00031 session marked for kill error.
As a first step, I got the list of running scheduler jobs as follows:
spool scheduler_running_jobs.html
set markup html on
select * from dba_scheduler_running_jobs;
spool off
set markup html off
Based on the information in the above output file, I located the SID of this running job.
As a next step, using the SID, I got details on this job from the v$session view as follows:
spool session_jobs.html
set markup html on
select * from v$session where sid='value_from_above_output';
spool off
set markup html off
As a next step, I issued the following command to kill this scheduler job:
alter system kill session 'sid,serial#';
This command hung for about a minute (around 60 seconds) and returned the ora-00031 session marked for kill error.
To fix this issue I had to make use of the immediate option:
alter system kill session 'sid,serial#' immediate;
The above command killed the session without any problem.


Interactive SQL Script Oracle SQL

We can choose a set of values from a database table by filtering the results using a WHERE clause. We can hardcode the filter criteria, or we can choose to supply the values at run time, which makes the SQL script more interactive. Here is a simple example of an interactive SQL script.
SQL> select * from department where deptid= &id;

Enter value for id: 30
old   1: select * from department where deptid= &id
new   1: select * from department where deptid= 30
DEPTID
———-
DEPTNAME
——————————————————————————–
30
IT
SQL> select * from department where deptid=30;
DEPTID
———-
DEPTNAME
——————————————————————————–
30
IT


Look at alert.log for ora errors

The Oracle database alert log file is an essential and important file for tracking different database activities. The alert log file is usually named alert_db_name.log, where db_name is the name of the database. It is a sequence of messages reporting critical activities in an Oracle database. It is one of the files that can be used in NOMOUNT mode; the other file that can be used in NOMOUNT mode is the parameter file.
What is the use of the alert log file?
1) Capture the major changes and events occurring during the running of the Oracle instance
2) Events like log switches, oracle-related errors, warnings and other messages are captured
3) All the initialization parameters are listed in the alert log file during Oracle instance startup
4) Oracle instance startup sequence is listed in the alert log file
5) Tablespace activities including tablespace creation, tablespace alter(datafile addition, resizing) are listed here
This information provides great help during troubleshooting and is a first-hand source for analysing problems.
Until Oracle 10g the alert log destination is specified using the initialization parameter BACKGROUND_DUMP_DEST
If this parameter is not specified, alert log is created in the default location. In most unix systems default location will be :
$ORACLE_HOME/rdbms/log
The alert log will be located inside the bdump directory (background dump directory)
How do you find the location of alert log file?
SQL> SHOW PARAMETER background_dump_dest
This query usually displays the background_dump_dest location/path
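Alternatively, the same location can be read from the v$parameter view:
SQL> select value from v$parameter where name = 'background_dump_dest';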
In an Oracle database the alert log file, named alert_SID.log, is the place where information on ORA- errors is stored. It is possible to extract information on ORA- errors from the alert log.
Here is the UNIX/Linux command to extract information on ORA- errors from the alert log. This can be issued at the command line. It is also possible to save it as a script and execute it.
$ cat alert_SID.log | grep ORA- | sort | uniq
cat command – opens the file/access the file
grep – match the information
sort – perform ascending sort. This is default
uniq – displays unique detail. Avoids duplicates
Also, we can use the following command
cat alert_SID.log | grep ORA- – lists Oracle errors
Importance of alert.log in security audit:-
An important administrative file that plays a major role in identifying problems is the alert_SID.log file present in the location BACKGROUND_DUMP_DEST. If you are not sure about the location of the alert log file named and saved as alert_SID.log, make use of the SQL*Plus command show parameter background_dump_dest, query the v$parameter view, etc.
It is interesting to know that the alert log records certain steps during the Oracle database instance startup process. It records the non-default initialization parameters (i.e.) parameters not set to their default values. Also, whenever an ALTER SYSTEM command is issued to change initialization parameters after the initial startup, the changes are recorded in the alert log. This acts as a report during a security audit.
It is to be noted that ALTER SYSTEM is used to change initialization parameters dynamically. Only some parameters can be changed with this command. Static initialization parameters can't be changed with this command.


RMAN 01009 Connect Right To Oracle RMAN

Oracle RMAN, the popular Oracle Recovery Manager, comes as an additional feature bundled with the Oracle database software and can be invoked by typing the simple rman command. Most of us are used to connecting to an Oracle database through the SQL command-line interface sqlplus in the format sqlplus sys/password as sysdba. Now let's see how we connect to the target database using rman: the errors below show what happens when the sqlplus-style AS SYSDBA clause is supplied, followed by the correct syntax.
RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "as": expecting one of: "newline, ;"
RMAN-01007: at line 1 column 25 file: standard input

RMAN> connect target sys/orcl@orcl as sysdba
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "as": expecting one of: "newline, ;"
RMAN-01007: at line 1 column 30 file: standard input
RMAN> connect target sys/orcl@orcl
connected to target database: ORCL (DBID=1256812001)
