AWS certified solutions architect associate practice exam

The AWS Certified Solutions Architect Associate exam is the foundation once you want to start using the AWS cloud platform. If you are in an infrastructure career, say system administrator, network administrator, database administrator, or storage administrator, upgrade your AWS skills without any delay. In the next two years your AWS skills combined with your experience will be a mandatory requirement to be hired and to retain your current job. This AWS Certified Solutions Architect Associate practice exam will help you prepare for the exam. These are not official exam questions and answers; we provide them for preparation help only
1) You want to move your documents onto AWS for immediate availability. Which AWS component will you make use of?
a) EC2
b) CloudFront
c) S3
d) Amazon Glacier
Answer : c
Explanation : Amazon S3 is the service we upload the document to for immediate access
2) What is a virtual machine image called in AWS?
a) EC2
b) Cloudwatch
c) Redshift
d) Kinmetrics
Answer : a
Explanation : AWS EC2 is the virtual machine image that is available by default in the AWS library. Depending on the requirement, these pre-built virtual machine templates with the proper OS, 32-bit/64-bit version, storage capacity, etc. can be deployed as part of the free tier or on a paid basis
3) You have been asked to choose an appropriate EBS storage volume that can also act as a boot volume for your application, with about 3000 IOPS. Which one will you use?
a) HDD
b) SSD
c) Flash drive
d) USB
Answer : b
Explanation : In AWS only SSD volumes can function as boot volumes; HDD cannot be a boot volume. A boot volume can be a General Purpose SSD or a Provisioned IOPS SSD
4) Amazon S3 storage classes can be which of the following?
a) Normal
b) standard
c) custom
d) reduced redundancy
Answer : b,d
Explanation : Once an object is stored in Amazon S3, a storage class is assigned to it depending on criticality. The default storage class is standard storage
5) Where are thumbnails stored in Amazon S3?
a) Reduced redundancy storage
b) standard storage
c) Elastic cache
d) Amazon Glacier
Answer : a
Explanation : Reduced redundancy storage is used to store easily reproducible thumbnails owing to its cost effectiveness
6) You just uploaded your file onto AWS. You want this upload to trigger an associated job in the Hadoop ecosystem. Which AWS components can help with this requirement?
a) Amazon S3
b) SMS
c) SQS
d) SNS
e) EC2
Answer : a,c,d
Explanation : In AWS a file is uploaded onto an Amazon S3 bucket. This upload action sends event notifications, which are delivered by SQS or SNS; the S3 event notification can also be sent directly to AWS Lambda. Once Lambda receives the event notification through one of these methods it can trigger workflows, alerts, or other automated processing, including starting a job
7) What does the CloudFormation cfn-init script do?
a) Fetch and parse metadata from the AWS::CloudFormation::Init key
b) Install packages
c) compress logs
d) Write files to disk
e) Enable/disable services
f) Start (or) stop services
Answer : a,b,d,e,f
Explanation : cfn-init is the helper script that reads template metadata from the AWS::CloudFormation::Init key and acts accordingly. The AWS::CloudFormation::Init key includes metadata for the Amazon EC2 instance
8) What is the use of the AWS CloudFormation list-stacks command?
a) 90 days history on all activity on stacks
b) List of all stacks that you have created
c) List of all stacks that you have deleted
d) List of all stacks that you have created or deleted up to 90 days ago
Answer : d
Explanation : list-stacks helps us get a list of all stacks created or deleted by us in the last 90 days. There is an option to filter based on stack status such as CREATE_COMPLETE or DELETE_COMPLETE. Running this command returns stack information including the name, stack identifier, template, and status for stacks that are currently running as well as stacks deleted in the last 90 days
9) What is Amazon SWF?
a) Task management and task coordinator service
b) Storage service
c) Scheduling service
d) Provisioning service
Answer : a
Explanation : Amazon Simple Workflow Service is a state tracker and task coordinator service in the cloud

AWS big data certification

AWS big data certification is a specialty certification from AWS. If you are a database administrator in Oracle, SQL Server, MySQL, MongoDB, etc., it is high time to upgrade your skill set to support database and data warehouse environments in AWS to retain your job

Bigdata Oracle DBA career significance

Any Oracle DBA who is among the top earners in their department often has a sense of insecurity not associated with their performance but owing to the fact that the data lifecycle has continuously evolved over the last decade. Say you were an Oracle DBA in the early 2000s: you would be expected to learn the latest offerings from Oracle like Oracle RAC, Data Guard, ASM, shell scripting to automate the operational tasks of an Oracle DBA, GoldenGate, etc. If we look at the job profile requirements of any Oracle DBA over the past two years, particularly starting in 2015, we see that organizations prefer an Oracle DBA who knows big data. Hadoop has essentially become an additional asset skill that an Oracle DBA can leverage to find his or her next best job. Interestingly, starting in 2017 Oracle DBAs are expected to manage and maintain normal Oracle databases as Oracle RDS services, and to deploy, manage, and maintain big data in AWS environments
In this post we provide our opinion on what the future could look like and whether it will really be beneficial for a DBA to learn Hadoop, the most popular framework that supports big data. This is purely our analysis and it is up to the readers to make their own decision
1) Oracle DBA will not be 100% gone – There have always been concerns about whether the Oracle DBA profession would be 100% gone ever since cloud computing came into the picture. Hadoop, the big data framework, mainly runs on EC2 machines that come as a service from popular providers like AWS, Microsoft Azure virtual machines, etc. This does offload the hardware handling, installation of databases, and much of a traditional DBA's tasks. However, the information in the cloud still needs to be managed. Hence, these cloud companies still need Oracle DBAs
2) Big data will not wipe out your database business – I myself was wondering if big data is going to replace the traditional RDBMS. Based on my personal observation, the simple answer is no. Big data is for businesses to unleash the information needed for their growth, and for healthcare professionals to model existing information and predict unknown facts to treat diseases well in advance. It will not impact a normal RDBMS environment
In the real world, startups don't want a separate infrastructure team. Instead they rely on the cloud and go with third-party hosting services like AWS, Azure, etc. As such, Hadoop can be a valuable asset for Oracle DBAs as well as DBAs in other disciplines: your company will prefer to make you part of an upcoming big data project rather than replace you, and your job role will not be wiped out anytime soon

PL/SQL Parameter Types Interesting Facts

A parameter is a value that is exchanged between PL/SQL subprograms (procedures, functions) and their calling programs (which could be anonymous PL/SQL blocks, another subprogram, or a simple EXECUTE procedure_name(parameter) / EXECUTE function_name(parameter)). A parameter in the calling program and the called program can be any one of the following three types:
1) IN Parameter – It is passed as a read-only value from the calling program to the called program. If we try to modify the IN parameter value inside the procedure being called, we get a compile-time error
2) OUT Parameter – An OUT parameter is a value that is populated inside the called program and handed back as output to the calling program. We can display the value of an OUT parameter using DBMS_OUTPUT.PUT_LINE, or using SQL*Plus host variables, also called global variables (in Oracle 11g iSQL*Plus is no longer available): VARIABLE variable_name datatype; EXECUTE procedure_name(parameter); PRINT variable_name. The variable_name holds the OUT parameter value returned and displayed in the host environment. We can also use OUT parameters inside Oracle Forms, Reports, Java, and C applications
3) IN OUT Parameter – It is both a value passed into the subprogram and a variable returned after processing. This is a common type used in reporting systems where we need to format output for display. A very good example is a phone number that is passed in as a raw value, say 9874563789, and after formatting comes back as, say, 987-456-3789 (see the sketch below)
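Below is a minimal sketch that puts all three modes together in one procedure. The procedure name format_phone and the country-code argument are made up for illustration; the formatting logic mirrors the phone number example above, and the calling session uses the SQL*Plus VARIABLE / EXECUTE / PRINT pattern described under OUT parameters.
-- Hypothetical procedure showing IN, OUT and IN OUT parameter modes
CREATE OR REPLACE PROCEDURE format_phone (
  p_country_code IN     VARCHAR2,  -- IN: read only; assigning to it would be a compile-time error
  p_formatted    OUT    VARCHAR2,  -- OUT: populated here and returned to the caller
  p_phone        IN OUT VARCHAR2   -- IN OUT: comes in raw, goes back formatted
) IS
BEGIN
  p_phone     := SUBSTR(p_phone, 1, 3) || '-' ||
                 SUBSTR(p_phone, 4, 3) || '-' ||
                 SUBSTR(p_phone, 7);
  p_formatted := '+' || p_country_code || ' ' || p_phone;
END;
/
-- Calling it from SQL*Plus with host (bind) variables
VARIABLE v_formatted VARCHAR2(30)
VARIABLE v_phone     VARCHAR2(20)
EXECUTE :v_phone := '9874563789';
EXECUTE format_phone('1', :v_formatted, :v_phone);
PRINT v_formatted
PRINT v_phone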

Oracle Table SET UNUSED Column

We can set a column in a table as UNUSED. When we specify this clause the column becomes invisible and inaccessible. It is equivalent to a dropped column. The column can be physically dropped at a later time when there is less resource consumption. We've seen that it is not possible to use the SET UNUSED clause on SYS-owned tables
Now we demonstrate the usage of SET UNUSED as a normal user(schema user) :
SQL> connect temp/temp
Connected.
SQL> desc test;
Name                                      Null?    Type
----------------------------------------- -------- ------------------------
ID                                        NOT NULL NUMBER(38)
NAME                                               VARCHAR2(10)

SQL> insert into test values(1,'info');
1 row created.
SQL> insert into test values(2,'new');
1 row created.
SQL> alter table test set unused (id);
Table altered.
SQL> desc test;
Name                                      Null?    Type
----------------------------------------- -------- ------------------------
NAME                                               VARCHAR2(10)
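The column no longer shows up in the DESC output, but the database still keeps track of it until it is physically dropped. As a quick check (assuming the standard USER_UNUSED_COL_TABS data dictionary view, which lists each of your tables along with its count of unused columns), you can run:
select * from user_unused_col_tabs;
The TEST table should be listed there with a count of 1 until the unused column is dropped as shown in the next step.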
DROP UNUSED Column Oracle Table:
If a column in a database table is set as UNUSED it is essential to drop it at a later point in time. This can be done as a bulk drop of all UNUSED columns by specifying the DROP UNUSED COLUMNS clause with ALTER TABLE. Here is how we go about it :
SQL> alter table test drop unused columns;
Table altered.

Oracle Data Mining install and start using this product

Oracle Data Mining is a part of Oracle Database Enterprise Edition. Installation of Oracle Data Mining involves the following steps:
1) Installing the Oracle database
2) Installing Oracle database companion
3) Create a data mining demo user
4) Running the sample programs
Installing the Oracle database:
1) Log in as a user with appropriate privileges
2) Install the database with sample schemas. Sample schemas are needed for the data mining sample programs.
3) Run setup.exe, which invokes Oracle Universal Installer. Click the option to create a starter database. DBCA is used to create the starter database.
4) Unlock the accounts SYS, SYSTEM, and SH. Change the passwords for these accounts.
5) Exit the wizard.
Installing Oracle Database Companion :
The Oracle Data Mining sample programs are installed with Oracle Database Companion.
The Database Companion installation process copies the Oracle Data Mining sample programs, along with examples and demonstrations of other database features, to the rdbms\demo subdirectory of the Oracle home directory. Use Oracle Universal Installer to install it by running the setup.exe program.
Create a Data Mining Demo User:
1) Log in as the SYS user
2) Create the data mining demo user:
SQL> CREATE USER dmuser IDENTIFIED BY password
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
QUOTA UNLIMITED ON USERS;
3) Run dmshgrants.sql to grant access to the SH schema. Several tables in SH are used by the Data Mining sample programs. Specify the Data Mining user name as the parameter.
@ %ORACLE_HOME%\rdbms\demo\dmshgrants dmuser
4) Connect to the database as the Data Mining user.
CONNECT dmuser
Enter password: password
5) Run dmsh.sql to populate the schema of the Data Mining user with tables, views, and other objects needed by the sample programs.
@ %ORACLE_HOME%\rdbms\demo\dmsh
COMMIT;
Run the Sample Programs:
To locate the sample programs on your computer, navigate to the rdbms\demo subdirectory under the Oracle home directory.
To display the Data Mining PL/SQL sample programs, search for the files that start with dm and end with .sql. (The list will include dmsh.sql and dmshgrants.sql, which are used to configure the Data Mining demo user ID.) In the same directory, search for the files that start with dm and end with .java to display the Java samples.

Free Oracle Database Articles, Tips, Jobs :

Delivered by FeedBurner

ORA-01403: no data found

I created a table. I left it empty. I created an anonymous block to extract data from the table. I got this error.
SQL> create table employee(
2 employee_id int primary key,
3 manager_id int,
4 department_id int,
5 first_name varchar2(20),
6 last_name varchar2(20),
7 email varchar2(30),
8 phone_number number(20),
9 hire_date date,
10 job_id int,
11 salary number,
12 commission_pct number);
Table created.
SQL> declare fir_name varchar2(20);
2 begin
3 select first_name into fir_name from employee;
4 end;
5 /
declare fir_name varchar2(20);
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 3
SQL> select * from employee;
no rows selected
SQL> insert into employee(first_name) values ('learnersreference');
1 row created.
SQL> select first_name from employee;
FIRST_NAME
--------------------
learnersreference
SQL> declare fir_name varchar2(20);
2 begin
3 select first_name into fir_name from employee;
4 end;
5 /
PL/SQL procedure successfully completed.
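If the table can legitimately be empty, another option is to trap the exception inside the block instead of relying on a row being present. A minimal sketch against the same employee table (with SERVEROUTPUT enabled; the handler message is just an example):
declare
  fir_name varchar2(20);
begin
  select first_name into fir_name from employee;
  dbms_output.put_line(fir_name);
exception
  when no_data_found then
    dbms_output.put_line('no rows found in employee');
end;
/
With this handler in place the block completes even when the table has no rows.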

ORA-12988: cannot drop column from table owned by SYS

It is not possible to set a column of a SYS-owned table as unused. The SET UNUSED clause lets us make a column of a table invisible so that it can be dropped at a later point in time; this is a measure to reduce resource consumption at drop time. I tried setting a column of a table in the SYS schema as unused and got this error.
SQL> create table dept(manager_id int primary key, dep_name varchar2(10));
Table created.
SQL> alter table dept set unused (manager_id);
alter table dept set unused (manager_id)
*
ERROR at line 1:
ORA-12988: cannot drop column from table owned by SYS
SQL> alter table dept set unused (dept);
alter table dept set unused (dept)
*
ERROR at line 1:
ORA-12988: cannot drop column from table owned by SYS
How to fix the ORA-12988 error?
We first create a new object with the same name in a schema other than SYS using CREATE TABLE ... AS SELECT * FROM the original table. We then log onto that schema, issue the ALTER TABLE command, and it succeeds without any error
SQL> create table prac.sys_object as select * from sys_object;
Table created.
SQL> connect prac/prac
Connected.
SQL> alter table sys_object drop column created;
Table altered.
This fixed the error ORA-12988: cannot drop column from table owned by SYS
