AWS certified solutions architect associate practice exam

The AWS Certified Solutions Architect Associate exam is the foundation once you want to start using the AWS cloud platform. If you are in an infrastructure career, say as a system administrator, network administrator, database administrator or storage administrator, upgrade your AWS skills without any delay. In the next two years AWS skills combined with your experience will be a mandatory skill to get hired and to retain your current job. This AWS certified solutions architect associate practice exam will help you prepare for the exam. These are not official exam questions and answers; we provide them for preparation help only
1) You want to move your documents onto AWS for immediate availability. Which AWS component will you make use of?
a) EC2
b) Cloudfront
c) S3
d) Amazon Glacier
Answer : c
Explanation : Amazon S3 is where we upload documents for immediate access
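For reference, this kind of upload can be done from the AWS CLI as sketched below; the bucket and file names are only placeholders for illustration:
aws s3 cp mydocument.pdf s3://my-example-bucket/ – uploads the document to the bucket
aws s3 ls s3://my-example-bucket/ – confirms the object is immediately available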
2) What is a virtual machine image called in AWS?
a) EC2
b) Cloudwatch
c) Redshift
d) Kinmetrics
Answer : a
Explanation : An AWS EC2 image is the virtual machine image available by default in the AWS library. Depending on requirements, these pre-built virtual machine templates with the proper OS, 32-bit/64-bit version, storage capacity etc. can be deployed as part of the free tier or on a paid basis
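As a rough sketch, one of these pre-built images can be launched from the AWS CLI; the AMI ID, instance type and key pair below are placeholders, not values from the question:
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro --key-name my-key-pair – launches a free-tier eligible instance from the chosen image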
3) You have been asked to choose appropriate EBS storage volume that can also act as boot volume for your application with about 3000 IOPS. Which one will you use?
a) HDD
b) SSD
c) Flash drive
d) USB
Answer : b
Explanation : In AWS only SSD volumes can function as boot volumes; HDD volumes can't. A boot volume can be General Purpose SSD (or) Provisioned IOPS SSD
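As an illustrative sketch only, a Provisioned IOPS SSD volume sized for about 3000 IOPS could be created with the AWS CLI; the size and availability zone are placeholder values:
aws ec2 create-volume --volume-type io1 --iops 3000 --size 100 --availability-zone us-east-1a – creates a Provisioned IOPS SSD (io1) volume with 3000 IOPS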
4) Amazon S3 storage classes can be which of following?
a) Normal
b) standard
c) custom
d) reduced redundancy
Answer : b,d
Explanation : Once an object gets stored in Amazon S3, a storage class is assigned to it depending on criticality. The default storage class is standard storage
5) Where are thumbnails stored in Amazon S3?
a) Reduced redundancy storage
b) standard storage
c) Elastic cache
d) Amazon glacier
Answer : a
Explanation : Reduced redundancy storage is used to store easily reproducible thumbnails owing to its cost effectiveness
6) You just uploaded your file onto AWS. You want this upload to trigger an associated job in hadoop ecosystem. Which AWS components can help with this requirement?
a) Amazon S3
b) SMS
c) SQS
d) SNS
e) Ec2
Answer : a,c,d
Explanation: In AWS a file is uploaded onto an Amazon S3 bucket. This upload action sends event notifications. The event notifications are delivered by SQS or SNS. The S3 event notification can also be sent directly to Amazon Lambda. Once Lambda receives the event notification through one of these methods it triggers workflows, alerts or other automated processing, including the start of a job
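As a hedged sketch of wiring this up, the bucket notification configuration can be set with the AWS CLI; the bucket name and the notification.json file holding the SQS/SNS/Lambda targets are assumptions made for illustration:
aws s3api put-bucket-notification-configuration --bucket my-example-bucket --notification-configuration file://notification.json – attaches the event notification targets defined in notification.json to the bucket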
7) What does the CloudFormation cfn-init script do?
a) Fetch and parse metadata from the AWS::CloudFormation::Init key
b) Install packages
c) compress logs
d) Write files to disk
e) Enable/disable services
f) Start (or) stop services
Answer : a,b,d,e,f
Explanation : cfn-init is the helper script that reads template metadata from the AWS::CloudFormation::Init key and acts accordingly. The AWS::CloudFormation::Init key includes metadata for the Amazon EC2 instance
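For context, cfn-init is typically invoked from the instance user data; a minimal sketch of the call, with placeholder stack name, logical resource name and region, looks like:
/opt/aws/bin/cfn-init -v --stack my-stack --resource MyEC2Instance --region us-east-1 – reads the AWS::CloudFormation::Init metadata of the MyEC2Instance resource and applies the packages, files and services it describes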
8) What is the use of the AWS CloudFormation list-stacks command?
a) 90 days history on all activity on stacks
b) List of all stacks that you have created
c) List of all stacks that you have deleted
d) List of all stacks that you have created or deleted upto 90 days ago
Answer : d
Explanation : list-stacks returns a list of all stacks created or deleted by us in the last 90 days. There is a filtering option to filter based on stack status such as CREATE_COMPLETE or DELETE_COMPLETE. Stack information including the name, stack identifier, template and status of created, currently running, and recently deleted stacks is returned by this command
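As an example of the filtering option mentioned above, the command can be run from the AWS CLI roughly as follows:
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE DELETE_COMPLETE – lists stacks in the given statuses from the last 90 days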
9) What is amazon SWF?
a) Task management and task coordinator service
b) Storage service
c) Scheduling service
d) Provisioning service
Answer : a
Explanation : Amazon Simple Workflow Service is a state tracker and task coordinator service in the cloud

AWS big data certification

AWS big data certification is a specialty certification from AWS. If you are a database administrator in Oracle, SQL Server, MySQL, MongoDB etc., it is high time to upgrade your skill set to support database and data warehouse environments in AWS to retain your job

Bigdata Oracle DBA career significance

Any Oracle DBA, often among the highest paid in their department, tends to have a sense of insecurity not associated with their performance but owing to the fact that the data lifecycle has continuously evolved over the last decade. Say you were an Oracle DBA in the early 2000s: you would be expected to learn the latest offerings from Oracle like Oracle RAC, Data Guard, ASM, GoldenGate, and shell scripting to automate the operational tasks of an Oracle DBA. If we look at the job profile requirements of any Oracle DBA over the past two years, particularly starting in 2015, we see that organizations prefer an Oracle DBA who knows big data. Hadoop has essentially become an additional asset skill that an Oracle DBA can leverage to find their next best job. Interestingly, starting in 2017 Oracle DBAs are expected to manage and maintain normal Oracle databases as Oracle RDS services, and to deploy, manage and maintain big data in an AWS environment
In this post we are providing our opinion on what the future could look like and whether it will really be beneficial for a DBA to learn Hadoop, the most popular framework that supports big data. This is purely our analysis and it is up to the readers to make a decision
1) The Oracle DBA will not be 100% gone – There have always been concerns about whether the Oracle DBA profession would disappear entirely ever since cloud computing came into the picture. Hadoop, the big data framework, is mainly supported on EC2 machines that come as a service from popular providers like AWS, Microsoft Azure virtual machines etc. This does offload the hardware handling, installation of databases and much of a traditional DBA's tasks. However, the information in the cloud still needs to be managed. Hence, these cloud companies still need Oracle DBAs
2) Big data will not wipe out your database business – I myself was wondering if big data is going to replace the traditional RDBMS. Based on my personal observation the simple answer is no. Big data is for businesses to unleash the information needed for their growth, and for healthcare professionals to model existing information and predict unknown facts to treat diseases well in advance. It will not impact a normal RDBMS environment
In the real world startups don't want a separate infrastructure team. Instead they rely on the cloud and go with third-party hosting services like AWS, Azure etc. As such, Hadoop can be a valuable asset for Oracle DBAs as well as DBAs in other disciplines: your company will prefer you to be part of an upcoming big data project rather than replace you, and it will not wipe out your job role anytime soon

AWS cloud support engineer interview questions

AWS is an Amazon company with lots of openings for fresh talent, open to fresh ideas and innovation. Amazon Web Services, the cloud-based service that has migrated infrastructure from the physical data center onto the online cloud, has been hiring engineers in various capacities including cloud support associate, cloud engineer, senior cloud support engineer, cloud architect, support manager etc. As a fresh graduate out of college this is a lucrative career option you can eye. Here we have proposed some interview questions that will help you crack the AWS interview, including AWS cloud support engineer interview questions. The interview questions do overlap across AWS cloud support associate, AWS cloud support engineer and AWS cloud architect, as all these positions demand good knowledge, skill and expertise in Linux/UNIX and networking basics to start with.
Note that these are not actual interview questions and have nothing to do with the official interviews. They are an aid prepared based on analysis of the AWS technology stack, current job openings and the job role responsibilities advertised in popular websites, covering what you can expect when interviewing for AWS cloud support engineer, AWS cloud support associate or AWS cloud support manager roles
1) Why should we consider AWS? How would you convince a customer to start using AWS?
The primary advantage is going to be cost savings. As a cloud support engineer your job role involves talking to current and prospective customers to help them determine if they really have to move to AWS from their current infrastructure. In addition to providing a convincing answer in terms of cost savings, it is better to give them a simple explanation of flexibility, elastic capacity planning that offers pay-as-you-use infrastructure, the easy-to-manage AWS console etc
2) What is your current job profile? How would you add value to the customer?
Though AWS is looking to hire fresh talent for cloud support engineer openings, if you have some work experience on the infrastructure side of the business, say as a system administrator, network administrator, database administrator, firewall administrator, security administrator, storage administrator etc., you are still a candidate to be considered for the interview.
All they are looking for is overall infrastructure knowledge, a little knowledge about the different tech stacks, how they inter-operate, and what it will be like once the infrastructure is on the web rather than in a physical data center.
If you don't have experience with AWS, don't worry. Try to leverage the ways and means you adopted to solve customer support calls, both internal and external, to show how you can bring value to the table.
Have some overview on how different components of infrastructure interact.
AWS wants to know your proactive measures towards customer relationships. Say you are going to discuss a project or an issue with a customer: it is better to have some preparatory work that comes in handy rather than being reactive. Value addition comes in terms of recommending the best solution and utilization of AWS services that will help them make decisions easily and quickly
3) Do you know networking?
You can be from many different backgrounds, say development, infrastructure, QA, customer support, network administration, system administration, firewall administration etc., but you should know networking. The cloud is network based, and to fix escalated application issues networking knowledge is very important
4) What networking commands do you make use of on daily basis to fix issues?
When we work with servers, be it physical or virtual, the first command that comes in handy to trace the request/response path is traceroute. On Windows systems the equivalent command is tracert
There are some more important commands – ping, ipconfig, ifconfig – that deal with network communication, network addresses and interface configuration
DNS commands – nslookup, Lookup of /etc/resolv.conf file in Linux systems to get details on DNS
5) What is the advantage of using TCP protocol?
TCP is used to exchange data reliably. It uses mechanisms of sequencing and acknowledgment, error detection and error recovery. This comes with the advantage of a reliable application but at the cost of increased transmission time
6) What is UDP?
User Datagram Protocol (UDP) is a connectionless protocol that can be used for fast, efficient applications that need less overhead compared to TCP
7) Do you know how the internet works in your environment?
This can be your home or office. Learn more on modem and its utilization in connection establishment
8) What is a process? How do you manage processes in Linux:-
In a Linux/Unix based OS a process is started or created when a command is issued. In simple terms, while a program is running in an OS an instance of the program is created; this is the process. To manage processes in Linux, process management commands come in handy
ps – this is the commonly used process management command to start with. ps command provides details on currently running active processes
top – This command provides details on all running processes. ps command lists active processes whereas top lists all the processes (i.e) activity of processor in real-time. This includes details on processor and memory being used
kill – To kill a process using the process id the kill command is used. The ps command provides details on the process id. To kill a process issue
kill pid
killall proc – Similar to kill, but kills all processes matching the name proc
9) Give details on foreground and background job commands in Linux:-
fg – this command brings the most recent job to the foreground. Typing the command fg will resume the most recently suspended job
fg n – This command brings job n to the foreground. A job recently sent to the background can be brought to the foreground by typing, for example, fg 1
bg – This command is used to resume a suspended program without bringing it to the foreground. The jobs command provides details on the list of stopped jobs as well as current background jobs
10) How to get details on current date and time in Linux?
Make use of the command date that shows details on current date and time. To get current month’s calendar use cal command
uptime – shows current uptime
11) What is difference between command df and du?
In Linux both df and du are space-related commands showing system space information
df – this command provides details on filesystem disk space usage
du – To get details on directory space usage use this command
free – this command shows details on memory and swap usage
12) What are the different commands and options to compress files in Linux?
Let's start by creating a tar named test.tar containing the needed files
tar cf test.tar files
Once the tar is available and uploaded to AWS there may be a need to untar the files. Use the command as follows:
tar xf file.tar
We can create a tar with gzip compression that will minimize the size of files to be transferred and creates test.tar.gz at the end
tar czf test.tar.gz files
To extract the gzipped tar compressed files use the command:
tar xzf test.tar.gz
Bzip2 compression can be used to create a tar as follows
tar cjf test.tar.bz2 files
To extract bzip2 compressed files use
tar xjf test.tar.bz2
To simply make use of gzip compression use
gzip testfile – This creates testfile.gz
To decompress testfile.gz use gzip -d testfile.gz
13) Give examples on some common networking commands you have made use of?
Note that the AWS stack is primarily Linux based, and the cloud architecture makes it heavily network dependent. As a result AWS interview questions can be related to networking irrespective of your system admin, database admin or bigdata admin background. Learn these simple networking commands:
When a system is unreachable first step is to ping the host and make sure it is up and running
ping host – This pings the host and output results
Domain related commands matter as AWS has become the preferred hosting for major internet based companies and SaaS firms
To get DNS information of the domain use – dig domain
To get whois information on domain use – whois domain
Host reverse lookup – dig -x host
Download file – wget file
To continue stopped download – wget -c file
14) What is your understanding of SSH?
SSH, the Secure Shell, is widely used for safe communication. It is a cryptographic network protocol used for operating network services securely over an unsecured network. Some of the commonly used ssh commands include
To connect to a host as a specified user using ssh use this command:
ssh username@hostname
To connect to a host on a specified port make use of this command
ssh -p portnumber username@hostname
To enable a keyed or passwordless login into specified host using ssh use
ssh-copy-id username@hostname
15) How do you perform search in Linux environment?
Searching and pattern matching are common functions that typically happen in a Linux environment. Here are the Linux commands:
grep – Grep command is the first and foremost when it comes to searching for files with pattern. Here is the usage:
grep pattern_match test_file – This will search for pattern_match in test_file
Search for pattern in directory that has set of files using recursive option as follows – grep -r pattern dir – Searches for pattern in directory recursively
A pattern can also be searched in combination with another command, i.e. the output of a command can be used as input for the pattern search and match – firstcommand | grep pattern
To find all instances of a file use locate command – locate file
16) Give details on some user related commands in Linux:-
Here are some user related Linux commands:
w – displays details on who is online
whoami – to know whom you are logged in as
finger user – displays information about the user
17) How to get details on kernel information in Linux?
uname -a command provides details on kernel information
18) How to get CPU and memory info on a Linux machine?
Issue the following commands:
cat /proc/cpuinfo for cpu information
cat /proc/meminfo for memory information
19) What are the file system hierarchy related commands in linux?
The file system hierarchy, starting with raw disks, the way disks are formatted into files, and files grouped together as directories, is all important for cracking the AWS interview. Here are some file system hierarchy related commands that come in handy
touch filename – creates a file with name filename. This command can also be used to update a file
ls – lists files and directories
ls -al – All files including hidden files are listed with proper formatting
cd dir – change to specified directory
cd – Changes to home directory
pwd – called present working directory that shows details on current directory
Make a new directory using mkdir command as follows – mkdir directory_name
Remove a file using the rm command – rm file – removes the file
To delete directory use -r option – rm -r directory_name
Remove a file forcefully using -f option – rm -f filename
To force remove a directory use – rm -rf directory_name
Copy the contents from one file to another – cp file1 file2
To copy a directory use – cp -r dir1 new_dir – copies dir1 to new_dir; if new_dir does not exist it is created
Move or rename a file using the mv command – mv file1 new_file
If new_file is an existing directory, file1 will be moved into that directory
more filename – output the contents of the file
head file – output the first 10 lines of the file
tail file – output the last 10 lines of the file
tail -f filename – output the contents of the file as it grows, to start with display last 10 lines
Create symbolic link to a file using ln command – ln -s file link – called soft link
20) What command is used for displaying the manual of a command?
Make use of the command man command_name
21) Give details on app related commands in linux:-
which app – shows details on which app will be run by default
whereis app – shows possible locations of application
22) What are the default port numbers of http and https?
Questions on http and https port numbers are a first step in troubleshooting a web app when a customer reports an issue
The default port number of http is 80 (8080 is a common alternative)
Default port number of https is 443
23) What is use of load balancer?
A load balancer is used to increase the capacity and reliability of applications. Capacity here means the number of users connecting to the applications. The load balancer distributes network and application traffic across many different servers, increasing application capacity
24) What is sysprep tool?
The System Preparation tool comes as a free tool with Windows and can be accessed from the %systemroot%\system32\sysprep folder. It is used to duplicate, test and deliver new installations of Windows based on an established installation
25) User is not able to RDP into a server. What could be the reason?
A probable reason is that the user is not part of the Remote Desktop Users local group on the terminal server
26) How would you approach a customer issue?
Most of the work of an AWS support engineer involves dealing with customer issues. As with any other support engineer, an AWS engineer should follow the approach of questioning the customer, listening to them, and confirming what has been collected. This is called the QLC approach, a much needed step to capture the issue description and confirm it
27) What types of questions can you ask customer?
A support engineer can ask two types of questions
1) Open ended questions – In this case your question will be a single statement and the answer you expect from the customer is detailed
2) Closed questions – In this case your question will have Yes or No, true or false type answers, or a single word answer in some cases
28) How do you consider the customer from an AWS technology perspective?
Even though the customer may be a long-standing customer of AWS, always think of the customer as a common person with no knowledge of AWS; talk more to them and explain more details to get a correct issue description statement
29) Give details on operators in linux?
> – the greater-than symbol is the output redirection operator used to write the output of a command into a file. Typically this is used to redirect the output of a command into a logfile. If the file already exists the contents are overwritten and only the most recent content is retained
>> – this is the same as output redirection except that it appends to the file if the file already exists
30) Explain the difference between hardlink and softlink in simple terms?
A hardlink is a link to the inode that holds the file contents, while a softlink is a link to the filename; if the filename changes, the softlink no longer resolves while the hardlink still works. For both hard and soft links the ln command is used: for a hardlink it is simply ln, for a softlink the ln -s option is used
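A quick sketch that demonstrates the difference (the file names are arbitrary examples):
touch original.txt
ln original.txt hard.txt – hard link sharing the same inode as original.txt
ln -s original.txt soft.txt – soft link pointing to the file name
ls -li original.txt hard.txt soft.txt – the -i option prints inode numbers; the hard link shows the same inode as the original, the soft link shows a different one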
31) What are some common linux commands AWS engineer should be aware of?
1) cat – This is plain simple command to access a file in UNIX
2) ls – Provides details on list of files and directories
3) ps – The process command provides details on list of processes in the system
4) vmstat – Virtual memory statistics comes handy during performance tuning
5) iostat – Command to determine I/O issues
6) top – This command provides details on top resource consuming processes
7) sar – This is a UNIX utility mainly used for tuning purpose
8) rm – This command is used to remove files
9) mv – moving the files and directories
cd – Enables us to change directories
date – gives us the time and date
echo – we can display text on our screen
grep – It is a pattern recognition command.It enables us to see if a certain word or set of words occur in a file or the output of any other command.
history – gives us the commands entered previously by us or by other users
passwd – this command enables us to change our password
pwd – to find out our present working directory or to simply confirm our current location in the file system
uname – gives all details of the system when used with options. We get details including systemname,kernel version etc.
whereis – gives us exact location of the executable file for the utility in the question
which – the command enables us to find out which version(of possibly multiple versions)of the command the shell is using
who – this command provides us with a list of all the users currently logged into the system
whoami – this command indicates who you are logged in as. If a user logs in as userA and does an su to userB, whoami displays userB as the output (who am i would still show userA).
man – this command will display a great detail of information about the command in the question
find – this command gives us the location of the file in a given path
more – this command shows the contents of a file,one screen at a time
ps – this command gives the list of all processes currently running on our system
cat – this command lets us to read a file
vi – this is referred to as text editor that enables us to read a file and write to it
emacs- this is a text editor that enables us to read a file and write to it
gedit – this editor enables us to read a file and write to it
diff – this command compares the two files, returns the lines that are different,and tells us how to make the files the same
export – we can make a variable value available to child processes by exporting the variable. This command is valid in bash and ksh.
setenv – this is the same as the export command and is used in csh and tcsh
env – to display the set of environment variables at the prompt
echo $variablename – displays the current value of the variable
source – whenever an environment variable is changed, we need to export the changes. The source command is used to put the environment variable changes into immediate effect. It is used in csh and tcsh
.profile – in ksh and bash use the . .profile command to get the same result as using the source command
set noclobber – to avoid accidental overwriting of an existing file when we redirect output to a file. It is a good idea to include this command in a shell-startup file such as .cshrc
32) What are the considerations while creating username/user logins for Security Administration purpose?
It is a good practice to follow certain rules while creating usernames/user logins
1) User name/user login must be unique
2) User name/user login must contain a combination of 2 to 32 letters, numerals, underscores(_),hyphens(-), periods(.)
3) There should not be any spaces/tab spaces while creating user names/user logins
4) A user name must begin with a letter and must have at least one lowercase letter
5) A username must be between three and eight characters long
6) It is a best practice to have alphanumeric user names/user logins. It can be a combination of lower case letters, upper case letters, numerals, punctuations
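As a rough illustration only, a login following these conventions could be created and verified as below; the name webadmin1 is just a placeholder:
useradd -m webadmin1 – creates the user along with a home directory
grep webadmin1 /etc/passwd – confirms the new login entry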
33) Give details on /etc/profile, the system profile file, and its usage in a linux environment:-
This is another important UNIX system administration file and it has much to do with user administration. This file is run when we first log into the system; it is the system profile file. After this the user profile file is run. The user profile is the file wherein we define the user's environment details. The following are the different forms of user profile files :
.profile
.bash_profile
.login
.cshrc
/home/username is the default home directory. The user's profile file resides in the user's home directory.
34) How to perform core file configuration in Linux environment?
Let's consider a UNIX flavor, say Solaris. Core file configuration involves the following steps:
1) As a root user, use the coreadm command to display the current coreadm configuration :
# coreadm
2) As a root user, issue the following command to change the core file setup :
# coreadm -i /cores/core_new.%n.%f
3) Run the coreadm command again to verify that the changes have been made permanent
# coreadm
The output line "init core file pattern :" will reflect the new changes made to the core file configuration.
From Solaris 10 onwards, the coreadm process is configured by the Service Management Facility (SMF) at system boot time. We can use the svcs command to check the status. The service name for the coreadm process is :
svc:/system/coreadm:default
35) How do you configure or help with customer printer configuration?
Administering printers involves a standard set of recurring tasks.
Once the printer server and printer client are set up, we may need to perform the following administrative tasks frequently :
1) Check the status of printers
2) Restart the print scheduler
3) Delete remote printer access
4) Delete a printer
36) How is zombie process recognized in linux and its flavors? How do you handle zombie process in linux environment?
A zombie process in UNIX/Linux/Sun Solaris/IBM AIX is recognized by the state Z. It doesn't use CPU resources but still uses space in the process table.
It is a dead process whose parent did not clean up after it and it is still occupying space in the process table.
They are defunct processes that are automatically removed when a system reboots.
Keeping OS and applications up to date and with latest patches prevents zombie processes.
Properly using the wait() call in the parent process will prevent zombie processes.
SIGCHLD is the signal sent by the child to the parent upon task completion, after which the parent reaps the child (proper termination).
kill -18 PPID – sends SIGCHLD (signal 18 on Solaris) to the parent process so that it reaps the zombie child
37) What is the use of /etc/ftpd/ftpusers in Linux?
/etc/ftpd/ftpusers is used to restrict users who can use FTP (File Transfer Protocol). FTP is a security threat as the password is not encrypted during transfer. FTP must not be used by sensitive user accounts such as root, snmp, uucp, bin, admin (default system user accounts).
As a security measure a file called /etc/ftpd/ftpusers is created by default. The users listed in this file are not allowed to do ftp. The ftp server in.ftpd reads this file before allowing users to perform ftp. If we want to restrict a user from doing ftp we have to include their name in this file.
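For illustration, restricting a placeholder user named guest from ftp would simply be:
echo "guest" >> /etc/ftpd/ftpusers – appends the login to the deny list read by in.ftpd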
38) Have you ever helped a customer restore a root file system in their environment?
Restoring the root file system (/) involves the following steps on SPARC and x86 (Intel) machines.
1) Log in as root user. It is a security practice to login as normal user and perform an su to take root user (super user) role.
2) Appearance of # prompt is an indication that the user is root
3) Use who -a command to get information about current user
4) When / (the root filesystem) is lost because of disk failure, boot from CD or from the network.
5) Add a new system disk to the system on which we want to restore the root (/) file system
6) Create a file system using the command :
newfs /dev/rdsk/partitionname
7) Check the new file system with the fsck command :
fsck /dev/rdsk/partitionname
8) Mount the filesystem on a temporary mount point :
mount /dev/dsk/devicename /mnt
9) Change to the mount directory :
cd /mnt
10) Write protect the tape so that we can’t accidentally overwrite it. This is an optional but important step
11) Restore the root file system (/) by loading the first volume of the appropriate dump level tape into the tape drive. The appropriate dump level is the lowest dump level of all the tapes that need to be restored. Use the following command :
ufsrestore -rf /dev/rmt/n
12) Remove the tape and repeat the step 11 if there is more than one tape for the same level
13) Repeat steps 11 and 12 with the next dump levels. Always begin with the lowest dump level and end with the highest dump level tape
14) Verify that the file system has been restored :
ls
15) Delete the restoresymtable file which is created and used by the ufsrestore utility :
rm restoresymtable
16) Change to the root directory (/) and unmount the newly restored file system
cd /
umount /mnt
17) Check the newly restored file system for consistency :
fsck /dev/rdsk/devicename
18) Create the boot blocks to restore the root file system :
installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/devicename — SPARC system
installboot /usr/platform/`uname -i`/lib/fs/ufs/pboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/devicename — x86 system
19) Remove the last backup tape, and insert a new tape onto which we can write. Make a dump level 0 backup of the newly restored system by issuing the following command :
ufsdump 0ucf /dev/rmt/n /dev/rdsk/deviceName
This step is needed because ufsrestore repositions the files and changes the inode allocations – the old backup will not truly represent the newly restored file system
20) Reboot the system :
#reboot (or)
# init 6
System gets rebooted and newly restored file systems are ready to be used.
39) What is the monitoring and reporting tool that comes as part of the AWS console?
CloudWatch, the tool listed under the management section of the AWS console, helps with monitoring and reporting metrics in an AWS environment. The following metrics can be monitored as part of CloudWatch:
1) CPU
2) Disk utilization
3) Network
4) Status Check
In addition to the above mentioned metrics, RAM can be monitored as a custom metric using CloudWatch
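A hedged sketch of publishing such a custom memory metric from an instance with the AWS CLI; the namespace, metric name, value and instance id are placeholders:
aws cloudwatch put-metric-data --namespace Custom/System --metric-name MemoryUtilization --value 63.5 --dimensions InstanceId=i-0123456789abcdef0 – pushes one data point that CloudWatch can graph and alarm on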
40) Give details on status check in cloudwatch?
In an AWS environment the status of both the instance and the system needs to be monitored. As such there are system status check and instance status check sections associated with each and every EC2 instance. As the name implies, the system status check makes sure that the physical machine on which the instance is hosted is in good shape. The instance status check is at the level of the EC2 instance, which literally translates to the virtual machine in the AWS environment
41) What happens if a failure is reported in the status check section of AWS?
Depending on what type of failure has been reported, the following actions can be taken:
In case of system failure – Restart the virtual machine; in AWS terms, restart the EC2 instance. This will automatically bring up the virtual machine on physical hardware that is issue free
Instance failure – Depending on the type of failure reported in the EC2 instance, this can mean stopping and starting the virtual machine to fix the issue. In case of disk failure, appropriate action can be taken at the operating system level to fix the issue
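As a sketch, the status checks for a given instance can be inspected and the instance cycled from the AWS CLI; the instance id is a placeholder:
aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0 – shows both the system status check and the instance status check
aws ec2 stop-instances --instance-ids i-0123456789abcdef0 – stops the instance
aws ec2 start-instances --instance-ids i-0123456789abcdef0 – starts it again, typically on healthy hardware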
42) What is an EC2 instance in AWS?
This is the basic component of AWS infrastructure. EC2 stands for Elastic Compute Cloud. In practice this is a pre-built virtual machine template hosted in AWS that can be chosen and customized to fit the application needs
This is the prime AWS service that eliminates the business necessity of owning a data center to maintain servers, hosts etc
43) What is ephemeral storage?
Ephemeral storage is storage that is temporary (or) non-persistent
44) What is the difference between instance and system status check in cloudwatch?
An instance status check checks the EC2 instance in an AWS environment whereas system status check checks the host
45) What is the meaning of an EBS volume status check warning?
It means the EBS volume is degraded or severely degraded. Hence, a warning in an EBS environment is something that can't be ignored the way it might be on other systems
46) What is the use of the ReplicaLag metric in AWS?
ReplicaLag is a metric used to monitor the lag between the primary RDS instance (the Relational Database Service, the database equivalent in the AWS environment) and the read replica, the secondary database system that is in read-only mode
47) What is the minimum granularity level that CloudWatch can monitor?
The minimum granularity that CloudWatch can monitor is 1 minute. In most real-world cases 5 minute metric monitoring is configured
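For example, pulling CPU utilization at the common 5 minute (300 second) period might look like the following; the instance id and time range are placeholders:
aws cloudwatch get-metric-statistics --namespace AWS/EC2 --metric-name CPUUtilization --dimensions Name=InstanceId,Value=i-0123456789abcdef0 --statistics Average --period 300 --start-time 2017-01-01T00:00:00Z --end-time 2017-01-01T01:00:00Z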
48) What is the meaning of EBS volume impaired?
EBS volume impaired means that the volume is stalled or not available
49) Where is ELB latency reported?
The latency reported by the Elastic Load Balancer (ELB) is available in CloudWatch
50) What is included in the EC2 instance launch log?
Once the EC2 instance is created, configured and launched following details are recorded in instance launch log:
Creating security groups – The result needs to be Successful. In case of issues the status will be different
Authorizing inbound rules – For proper authorization this should show Successful
Initiating launches – Again this has to be Successful
At the end we see a message that says Launch initiation complete
51) What will happen once an EC2 instance is launched?
After the EC2 instance has been launched it will be in the running state. Once an instance is in the running state it is ready for use. At this point usage hours, which are typically billable resource usage, start accruing. This continues until we stop or terminate the instance. The next immediate step is to view the instance

hadoop error could not find or load main class fs

As a first step in learning hadoop, I'm currently reading the Hortonworks tutorial on mirroring datasets between hadoop clusters with Apache Falcon. I stumbled on the error: could not find or load main class fs
As a first step I tried logging in as falcon user
su – falcon
Now I'm trying to create a directory using the hdfs command as provided in the tutorial:
hadoop fs -mkdir /apps/falcon/primaryCluster
Error: could not find or load main class fs
As a next step I tried setting HADOOP_PREFIX as follows
export HADOOP_PREFIX=/usr/hdp/current/hadoop-client
This did not fix the issue either
As a next step instead of fs I tried using dfs
hadoop dfs -mkdir /apps/falcon/primaryCluster
This worked fine
To confirm that the folder got created I issued the following command, which also worked fine
hdfs dfs -ls /apps/falcon

Hadoop data load interview question answer preparation

1) What are all the datasources from which we can load data into Hadoop?
Hadoop is an open source framework for supporting distributed data and processing of big data. First step would be to pump data into hadoop. Datasources can come in many different forms as follows:
1) Traditional relational databases like oracle
2) Data warehouses
3) Middle tier including web server and application server – server logs are a major source of information
4) Database logs
2) What tools are mainly used in data load scenarios?
Hadoop supports data load into and out of hadoop from one or more of the above mentioned datasources. Tools including Sqoop and Flume are used in data load scenarios. If you are from an Oracle background, think of tools like Data Pump and SQL*Loader that help with data load. Though not exactly the same, logic-wise they match
3) What is a load scenario?
Bigdata loaded into hadoop can come from many different datasources. Depending on datasource origin there are many different load scenarios as follows:
1) Data at rest – Normal information stored in files, directories and sub-directories is considered data at rest. These files are not intended to be modified any further. To load such information HDFS shell commands like cp, copyFromLocal and put can be used
2) Data in motion – Also called streaming data. This is a type of data that is continuously being updated. New information keeps getting added to the datasource. Logs from web servers like Apache, logs from application servers, and database server logs, say alert.log in the case of an Oracle database, are all examples of data in motion. It is to be noted that multiple logs need to be merged before being uploaded onto hadoop
3) Data from web server – Web server logs
4) Data from a datawarehouse – Data should be exported from traditional warehouses and imported onto hadoop. Tools like sqoop, bigsql load, and jaql netezza can be used for this purpose
4) How does sqoop connect to relational databases?
Information stored in a relational DBMS like Oracle, MySQL, SQL Server etc. can be loaded into Hadoop using sqoop. As with any load tool, sqoop needs some parameters to connect to the RDBMS, pull information, and upload the data into hadoop. Typically this includes
4.1) username/password
4.2) connector – this is a database specific JDBC driver needed to connect to many different databases
4.3) target-dir – This is the name of the directory in HDFS into which information is loaded as a csv file
4.4) WHERE – a subset of rows from a table can be exported and loaded using the WHERE clause
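Putting those parameters together, a hedged sqoop import sketch could look like the line below; the host, database, credentials, table and directory names are placeholders:
sqoop import --connect jdbc:mysql://dbhost/salesdb --username dbuser --password dbpass --table orders --where "order_date > '2017-01-01'" --target-dir /user/hadoop/orders – pulls the matching rows from the table into HDFS as delimited files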

slamdata 3.0 makes live mongodb data display without ETL

Earlier we discussed the availability of slamdata, a non-ETL solution to display live data from mongodb. It is interesting to know that slamdata has undergone rapid development over the past two years and is currently at stable version 3.0
Slamdata makes it possible to display live data from a mongodb database without using an ETL tool. The hassle of extraction, transformation, loading and ETL mapping is eliminated with slamdata
Here are some interesting features of slamdata 3.0 that makes it a better choice to access live data from mongodb

1) Powerful APIs for developers – Utilizing the APIs, charts can be easily embedded into mongodb applications. I remember working on a reporting tool with a mysql backend and perl libraries to chart the data in the front-end. Now, this is a piece of cake utilizing slamdata APIs
2) Easy analytics using APIs – Analytics can be easily embedded into mongodb applications. If you are currently using the pandas python package for your predictive analytics, you might be aware of the power of predictive analytics on social media like twitter, facebook, linkedin etc. Try slamdata to easily implement predictive analytics in a mongodb application
3) Brand-new user interface with best look and feel – This is easy to use, brand new, extremely powerful
4) Dashboard types supported – This version supports both static and dynamic dashboard creation. Dashboard is created in user interface
5) Enhanced framework – The framework has become more extensible, making it possible to write connectors for databases beyond mongodb including Couchbase, MarkLogic, postgresql etc
6) Gallery of charts – A detailed roster of charts is supported including basic area, basic line, irregular line, area, line, stacked area, line, bar, scatter, candlestick, pie, radar, chord, fd charts, maps, eventriver, heatmap, venn, tree, treemap, wordcloud etc
7) Powerful documentation from slamdata makes it the best analytic tool for mongodb

For storing big data, NoSQL databases like mongoDB come in handy. These databases can easily store petabytes of data and can be scaled via sharding
Download slamdata 3.0 for free

Mongodb certification exam questions

These Mongodb certification exam questions will help you prepare for and crack the mongodb certification
1) How do you monitor mongodb instances?
a) mongodb utilities
b) Ops manager
c) database commands
d) All of the above
Answer : d
Explanation : A Mongodb instance should be monitored starting with the set of utilities that come pre-packaged as part of mongodb; these are mainly used for reporting purposes. Database commands come in handy to get details on current database statistics. In addition, mongodb cloud manager, a cloud monitoring GUI, and Ops Manager, an on-premises install with features equivalent to mongodb cloud manager, help with visualization and real-time alerts from the database
2) How do you start mongod and mongos instances using config file?
a) mongod -f /etc/mongod.conf; mongos -f /etc/mongos.conf
b) mongod -a /etc/mongod.conf; mongos -a /etc/mongos.conf
c) mongod -h /etc/mongod.conf; mongos -h /etc/mongos.conf
d) mongod -s /etc/mongod.conf; mongos -s /etc/mongos.conf
Answer : a
Explanation : We can start mongod and mongos instances from the command line as well as from config files. To make use of a config file, we specify the option -f
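A minimal sketch of what such a config file might contain; the paths and port below are assumptions for illustration, not required values:
storage:
  dbPath: /var/lib/mongodb
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
The instance is then started with mongod -f /etc/mongod.conf as in option a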
