All posts by Winsomesoft

Selenium Interview Questions And Answers

Our experts provide Selenium Testing interview questions and answers/FAQs that can develop your career and knowledge and help you find the right job in a good MNC, no matter what kind of company hires you.

1)What is Automation Testing?

Automation testing or Test Automation is the process of automating a manual process to test the application/system under test. Automation testing involves the use of a separate testing tool that lets you create test scripts which can be executed repeatedly and do not require any manual intervention.

2) What are the benefits of Automation Testing?

Benefits of Automation testing are:

Supports execution of repeated test cases
Aids in testing a large test matrix
Enables parallel execution
Encourages unattended execution
Improves accuracy thereby reducing human-generated errors
Saves time and money

3)Why should Selenium be selected as a test tool?

Selenium

is free and open source
has a large user base and helpful communities
has cross-browser compatibility (Firefox, Chrome, Internet Explorer, Safari etc.)
has great platform compatibility (Windows, Mac OS, Linux etc.)
supports multiple programming languages (Java, C#, Ruby, Python, Perl etc.)
has fresh and regular repository developments
supports distributed testing

4)How will you find an element using Selenium?

In Selenium, every object or control on a web page is referred to as an element. There are different ways to find an element on a web page (a short Java sketch follows the list):

ID
Name
Tag
Attribute
CSS
Link Text
Partial Link Text
XPath etc.
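A minimal Java sketch of these locator strategies, assuming a hypothetical login page whose IDs, names and link texts are invented purely for illustration:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class LocatorDemo {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/login");   // hypothetical page

        WebElement byId      = driver.findElement(By.id("username"));                       // ID
        WebElement byName    = driver.findElement(By.name("password"));                     // Name
        WebElement byTag     = driver.findElement(By.tagName("button"));                    // Tag
        WebElement byCss     = driver.findElement(By.cssSelector("input[type='submit']"));  // CSS
        WebElement byLink    = driver.findElement(By.linkText("Forgot password?"));         // Link text
        WebElement byPartial = driver.findElement(By.partialLinkText("Forgot"));            // Partial link text
        WebElement byXpath   = driver.findElement(By.xpath("//input[@id='username']"));     // XPath

        driver.quit();
    }
}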

5)List out the test types that are supported by Selenium?

Selenium can be used for testing web-based applications.

The test types that can be supported are:

a) Functional

b) Regression

For post-release validation with continuous integration, the following automation tools could be used:

a) Jenkins

b) Hudson

c) Quick Build

d) CruiseControl

6)Mention what is the use of XPath?

XPath is used to find WebElements in web pages. It is also useful in identifying dynamic elements.

7)Explain the difference between single and double slash in XPath?

Single slash ‘/’

Single slash ( / ) starts the selection from the document node
It allows you to create ‘absolute’ path expressions

Double slash ‘//’

Double slash ( // ) starts the selection matching anywhere in the document
It enables you to create ‘relative’ path expressions
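A short sketch, again with a hypothetical page structure, showing how the two forms are used from WebDriver:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class XpathDemo {
    static void demo(WebDriver driver) {
        // Single slash: an 'absolute' XPath that starts selection from the document (root) node
        WebElement absolute = driver.findElement(By.xpath("/html/body/div[1]/form/input[1]"));

        // Double slash: a 'relative' XPath that matches anywhere in the document
        WebElement relative = driver.findElement(By.xpath("//input[@id='username']"));
    }
}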

8)What is the difference between assert and verify commands?

Assert: Assert command checks whether the given condition is true or false. Let’s say we assert whether the given element is present on the web page or not. If the condition is true then the program control will execute the next test step but if the condition is false, the execution would stop and no further test would be executed.

Verify: Verify command also checks whether the given condition is true or false. Irrespective of the condition being true or false, the program execution doesn’t halt i.e. any failure during verification would not stop the execution and all the test steps would be executed.

9) What are JUnit Annotations and what are the different types of annotations which are useful?

In Java, a special form of syntactic metadata can be added to the source code; this is known as annotations. Variables, parameters, packages, methods and classes can be annotated. Some of the JUnit annotations which are useful are listed below (a short skeleton test class follows the list):

@Test
@Before
@After
@Ignore
@BeforeClass
@AfterClass
@RunWith
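A minimal JUnit 4 skeleton showing where each of these annotations sits in a test class (the method bodies are placeholders):

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Ignore;
import org.junit.Test;

public class AnnotationDemoTest {

    @BeforeClass
    public static void onceBeforeAllTests() { /* e.g. start a shared resource */ }

    @Before
    public void beforeEachTest() { /* e.g. prepare test data */ }

    @Test
    public void shouldDoSomething() { /* the actual test logic and assertions */ }

    @Ignore("Temporarily disabled")
    @Test
    public void skippedTest() { /* this test is skipped */ }

    @After
    public void afterEachTest() { /* e.g. clean up test data */ }

    @AfterClass
    public static void onceAfterAllTests() { /* e.g. release the shared resource */ }
}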

10)Mention what is the difference between Implicit wait and Explicit wait?

Implicit Wait: Sets a default timeout for all successive WebElement searches. For the specified amount of time the driver will keep looking for the element again and again before throwing a NoSuchElementException. It waits for elements to show up.

Explicit Wait: It is a one-timer, used for a particular search and a particular condition.
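A minimal sketch of both wait styles, using the Selenium 4 Duration-based overloads and a hypothetical page and element ID:

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitDemo {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();

        // Implicit wait: applies to every findElement call for the life of the driver
        driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(10));

        driver.get("https://example.com");  // hypothetical page

        // Explicit wait: a one-off wait for a specific condition on a specific element
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(15));
        WebElement result = wait.until(
                ExpectedConditions.visibilityOfElementLocated(By.id("result")));

        driver.quit();
    }
}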

11)Explain what is the difference between findElements() and findElement()?

findElement():

It finds the first element within the current page using the given “locating mechanism”. It returns a single WebElement.

findElements(): Using the given “locating mechanism”, it finds all the elements within the current page. It returns a list of WebElements.
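A small sketch illustrating the difference, assuming an already-created driver:

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class FindDemo {
    static void demo(WebDriver driver) {
        // findElement(): first match only; throws NoSuchElementException if nothing matches
        WebElement firstLink = driver.findElement(By.tagName("a"));

        // findElements(): every match; returns an empty list (no exception) if nothing matches
        List<WebElement> allLinks = driver.findElements(By.tagName("a"));
        System.out.println("Links on the page: " + allLinks.size());
    }
}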

12)Explain what are the JUnits annotation linked with Selenium?

The JUnit annotations linked with Selenium are (a minimal skeleton follows the list):

@Before public void method() – this method runs before each test; it can be used to prepare the test environment (for example, create the WebDriver instance)
@Test public void method() – the @Test annotation identifies this method as a test method
@After public void method() – this method runs after each test; it can be used to clean up the test environment (for example, quit the WebDriver instance)
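A minimal Selenium-plus-JUnit sketch of this lifecycle (the URL and expected title are hypothetical):

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.Assert.assertTrue;

public class LoginPageTest {
    private WebDriver driver;

    @Before
    public void setUp() {                    // runs before each test: prepare the environment
        driver = new ChromeDriver();
    }

    @Test
    public void titleShouldMentionLogin() {  // the actual test method
        driver.get("https://example.com/login");   // hypothetical URL
        assertTrue(driver.getTitle().contains("Login"));
    }

    @After
    public void tearDown() {                 // runs after each test: clean up
        driver.quit();
    }
}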

13)What is the difference between close() and quit()?
The close() method closes the current browser only whereas quit() method closes all browsers opened by WebDriver.

14)What is the main disadvantage of implicit wait?
The main disadvantage of implicit wait is that it slows down test performance.

Another disadvantage of implicit wait is:

Suppose you set the waiting limit to 10 seconds and the element appears in the DOM after 11 seconds; your test will fail because you told it to wait a maximum of 10 seconds.

15)Mention 5 different exceptions you have encountered in Selenium WebDriver?

Five different exceptions commonly encountered in Selenium WebDriver are

WebDriverException
NoAlertPresentException
NoSuchWindowException
NoSuchElementException
TimeoutException

16) How can you retrieve the message in an alert box?

You can use the storeAlert command, which will fetch the message of the alert pop-up and store it in a variable.
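storeAlert is a Selenium IDE/RC command; in Selenium WebDriver the equivalent is to switch to the alert and read its text. A small sketch, assuming a driver is passed in:

import org.openqa.selenium.Alert;
import org.openqa.selenium.WebDriver;

public class AlertDemo {
    static String readAlertMessage(WebDriver driver) {
        Alert alert = driver.switchTo().alert();  // switch focus to the alert pop-up
        String message = alert.getText();         // retrieve the alert message
        alert.accept();                           // dismiss the alert (click OK)
        return message;
    }
}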

17) Explain what is the main difference between WebDriver and RC?

The main difference between Selenium RC and WebDriver is that Selenium RC injects JavaScript functions into the browser when the page is loaded, whereas Selenium WebDriver drives the browser using the browser's built-in support.

18)What are the technical limitations while using Selenium RC?

Apart from the “same origin policy” restriction of JavaScript, Selenium RC is also restricted from exercising anything that is outside the browser.

19)Other than the default port 4444 how you can run Selenium Server?

You can run the Selenium server on a port other than its default by specifying the port when launching it: java -jar selenium-server.jar -port <port-number>

20)To enter values onto text boxes what is the command that can be used?

To enter values into text boxes we can use the sendKeys() command.
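A minimal sketch, assuming a hypothetical text box with the ID “username”:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class TextBoxDemo {
    static void typeUsername(WebDriver driver) {
        WebElement username = driver.findElement(By.id("username"));  // hypothetical text box
        username.clear();                  // clear any pre-filled value
        username.sendKeys("testuser");     // type the value into the text box
    }
}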


AWS Technical Interview Questions and Answers

Our experts provide AWS Technical interview questions and answers/FAQs that can develop your career and knowledge and help you find the right job in a good MNC, no matter what kind of company hires you.

1) Have you worked on containers?
Containers are a form of lightweight virtualization, heavier than chroot but lighter than hypervisors. They provide isolation among processes while using the same kernel as the host machine, via the cgroups functionality within the kernel. Container formats differ among themselves in that some provide a more VM-like experience while others containerize only a single application.

LXC containers are the most VM-like and the most heavyweight, while Docker used to be more lightweight and was initially designed for single-application containers. In more recent releases Docker introduced whole-machine containerization features, so now Docker can be used both ways. There is also rkt from CoreOS and LXD from Canonical, which builds upon LXC.

2) What is Kubernetes? Explain
It is a massively scalable tool for managing containers, made by Google. It is used internally on huge deployments, and because of that it is perhaps the best option for production use of containers. It supports self-healing by restarting non-responsive containers, it packs containers in a way that makes them use fewer resources, and it has many other great features.

3) What is the function of a CI (Continuous Integration) server?
A CI server's function is to continuously integrate all changes being made and committed to the repository by different developers and check for compile errors. It needs to build the code several times a day, preferably after every commit, so that it can detect which commit caused a breakage if one happens.

Note: Other available and popular CI tools are Jenkins, TeamCity, CircleCI, Hudson, Buildbot etc.

4) What is Continuous Delivery?
It is the practice of delivering software for testing as soon as it is built by the CI (Continuous Integration) server. It requires heavy use of a version control system so that the latest build is always available to developers and testers alike.

5) What is Vagrant and what is it used for?
Vagrant is a tool that can create and manage virtualized (or containerized) environments for testing and developing software. At first, Vagrant used VirtualBox as the hypervisor for virtual environments, but now it also supports KVM.

6) Have you ever used any scripting language?
As far as scripting languages go, the simpler the better. In fact, the language itself isn't as important as understanding design patterns and development paradigms such as procedural, object-oriented, or functional programming.

Currently, several scripting languages are available, so the question arises: what is the most appropriate language for the DevOps approach? It depends on the context of the project and the tools used; for example, if Ansible is used it is good to have knowledge of Python, and if it is Chef, then Ruby.

7) What is the role of a configuration management tool in DevOps?
Automation plays an essential role in server configuration management. For that purpose we use CM tools; they store information about versions and builds of the software and testware, and provide traceability between software and testware.

8) What is the purpose of CM tools and which one have you used?
Configuration Management tools' purpose is to automate the deployment and configuration of software on a large number of servers. Most CM tools use an agent architecture, which means that every machine being managed needs to have an agent installed. My favorite tool is one that uses an agentless architecture – Ansible. It only requires SSH and Python. And if the raw module is being used, not even Python is required, because it can run raw bash commands. Other available and popular CM tools are Puppet, Chef and SaltStack.

9) What is OpenStack?
OpenStack is often called a Cloud Operating System, and that is not far from the truth. It is a complete environment for deploying IaaS, which gives you the possibility of making your own cloud similar to AWS. It is highly modular and consists of many sub-projects, so you can pick and choose which functionality you need. OpenStack distributions are available from Red Hat, Mirantis, HPE, Oracle, Canonical and many others. It is a completely open source project, but some vendors make proprietary distributions.

10) Classify cloud platforms by category?
Cloud computing software can be classified as Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS).

SaaS is a piece of software that runs over the network on a remote server and has only its user interface exposed to users, usually in a web browser. For example, salesforce.com.

Infrastructure as a Service is a cloud environment that exposes a VM to the user to use as an entire OS or container, where you can install anything you would install on your own server. Examples of this would be OpenStack, AWS, Eucalyptus.
PaaS allows users to deploy their own application on a preinstalled platform, usually a framework of application servers and a suite of developer tools. Examples of this would be OpenShift and Heroku.

11) What are the easiest ways to build a small cloud?
VMfest is one of the options for making an IaaS cloud from VirtualBox VMs in no time. If you want a lightweight PaaS there is Dokku, which is basically a bash script that makes a PaaS out of Docker containers.

12) What is AWS (Amazon Web Services)? Did you get a chance to work on Amazon tools?
AWS provides a set of flexible services designed to enable companies to create and deliver products with greater speed and reliability using AWS and DevOps practices. These services simplify provisioning and infrastructure management, application code deployment, automated software release processes, and monitoring of application and infrastructure performance. Amazon provides tools like AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline etc. that help to make DevOps easier.

13) What is EC2?
Amazon EC2 Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances.

The EC2 service is inseparable from the concept of the Amazon Machine Image (AMI). The AMI is indeed the image of a virtual machine that will be executed. EC2 is based on Xen virtualization, which is why it is quite easy to move Xen servers to EC2.

14) Do you find any advantage in using a NoSQL database over an RDBMS?
Typical web applications are built with a three-tier architecture. To carry the load, more web servers are simply added behind a load balancer to support more users. The ability to scale out is a key principle in the world of cloud computing, and it is more and more important in environments where VM instances can be easily added or removed to meet demand.

However, when it comes to the data layer, relational databases (RDBMS) do not scale out easily and do not provide a flexible data model. Managing more users means adding larger servers, and large servers are very complex, proprietary and disproportionately expensive, in contrast to the low-cost "commodity hardware" architectures used in the cloud. Organizations are beginning to see performance issues with their relational databases for existing or new applications. Especially as the number of users increases, they realize the need for a faster and more flexible database. This is the time to begin to assess and adopt NoSQL databases in their web applications.

15) What are the main difficulties when migrating from SQL to NoSQL?
Each record in a relational database conforms to a schema – with a fixed number of fields (columns), each having a specified name and data type. Every record is the same. The data is normalized across several tables. The advantage is that there is less duplicate data in the database. The downside is that a change in the schema means performing several "alter table" statements that require expensive locks on multiple tables simultaneously to ensure that the change does not leave the database in an inconsistent state.

With document databases, on the other hand, each document can have a completely different structure from other documents. No additional management is required on the database side to handle changes in the schema.


AWS DevOps Interview Questions and Answers

Our experts provide AWS DevOps interview questions and answers/FAQs that can develop your career and knowledge and help you find the right job in a good MNC, no matter what kind of company hires you.

Which are the areas where DevOps is implemented?

  • Production Development
  • IT Operations development
  • Creation of the production feedback and its development

What is the popular scripting language of DevOps?

  • Python.

What are the types of HTTP requests?

The types of HTTP requests are listed below (a short Java sketch follows the list):

  • GET
  • HEAD
  • PUT
  • POST
  • PATCH
  • DELETE
  • TRACE
  • CONNECT
  • OPTIONS
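A short Java sketch issuing two of these request types with the java.net.http client (Java 11+); the URL is only a placeholder:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpMethodsDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET: retrieve a resource
        HttpRequest get = HttpRequest.newBuilder(URI.create("https://example.com/")).GET().build();
        System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).statusCode());

        // HEAD: like GET but returns headers only, no body
        HttpRequest head = HttpRequest.newBuilder(URI.create("https://example.com/"))
                .method("HEAD", HttpRequest.BodyPublishers.noBody()).build();
        System.out.println(client.send(head, HttpResponse.BodyHandlers.discarding()).statusCode());
    }
}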

What are the advantages of DevOps?
Technical benefits:
Continuous software delivery
Less complex problems to fix
Faster resolution of problems

Business benefits:
Faster delivery of features
More stable operating environments
More time available to add value (rather than fix/maintain)

What are the core operations of DevOps in terms of development and Infrastructure?
The core operations of DevOps:

  • Application development
  • Code developing
  • Code coverage
  • Unit testing
  • Packaging

Deployment With infrastructure

  • Provisioning
  • Configuration
  • Orchestration
  • Deployment



AWS Developer Interview Questions And Answers

Our experts provide AWS Developer interview questions and answers/FAQs that can develop your career and knowledge and help you find the right job in a good MNC, no matter what kind of company hires you.

1) Explain what is AWS?
AWS stands for Amazon Web Service; it is a collection of remote computing services also known as cloud computing platform.  This new realm of cloud computing is also known as IaaS or Infrastructure as a Service.
2) Mention what are the key components of AWS?
The key components of AWS are
Route 53: A DNS web service
Simple E-mail Service: It allows sending e-mail using RESTful API calls or via regular SMTP
Identity and Access Management: It provides enhanced security and identity management for your AWS account
Simple Storage Device or (S3): It is a storage device and the most widely used AWS service
Elastic Compute Cloud (EC2): It provides on-demand computing resources for hosting applications. It is very useful in case of unpredictable workloads
Elastic Block Store (EBS): It provides persistent storage volumes that attach to EC2 to allow you to persist data past the lifespan of a single EC2 instance
CloudWatch: To monitor AWS resources, it allows administrators to view and collect key metrics. Also, one can set a notification alarm in case of trouble

3) Explain what is S3?
S3 stands for Simple Storage Service. You can use S3 interface to store and retrieve any amount of data, at any time and from anywhere on the web.  For S3, the payment model is “pay as you go”.
4) Explain what is AMI?
AMI stands for Amazon Machine Image.  It’s a template that provides the information (an operating system, an application server and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud.  You can launch instances from as many different AMIs as you need.
5) Mention what is the relation between an instance and AMI?
From a single AMI, you can launch multiple types of instances.  An instance type defines the hardware of the host computer used for your instance. Each instance type provides different compute and memory capabilities.  Once you launch an instance, it looks like a traditional host, and we can interact with it as we would with any computer.
6) What does an AMI include?
An AMI includes the following things
A template for the root volume for the instance
Launch permissions that decide which AWS accounts can use the AMI to launch instances
A block device mapping that determines the volumes to attach to the instance when it is launched

7) How can you send a request to Amazon S3?
Amazon S3 is a REST service; you can send requests by using the REST API or the AWS SDK wrapper libraries that wrap the underlying Amazon S3 REST API.
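A minimal sketch using the AWS SDK for Java (v1) wrapper library; the bucket and key names are hypothetical, and credentials/region are assumed to come from the default provider chain:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3Demo {
    public static void main(String[] args) {
        // Credentials and region are picked up from the default provider chain
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // PUT an object and GET it back (bucket and key names are hypothetical)
        s3.putObject("my-example-bucket", "notes/hello.txt", "Hello from the SDK");
        String body = s3.getObjectAsString("my-example-bucket", "notes/hello.txt");
        System.out.println(body);
    }
}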

8) Mention what is the difference between Amazon S3 and EC2?

The difference between EC2 and Amazon S3 is:

EC2:
It is a cloud web service used for hosting your application
It is like a huge computer machine which can run either Linux or Windows and can handle applications like PHP, Python, Apache or any databases

S3:
It is a data storage system where any amount of data can be stored
It has a REST interface and uses secure HMAC-SHA1 authentication keys

9) How many buckets can you create in AWS by default?
By default, you can create up to 100 buckets in each of your AWS accounts.
10) Explain can you vertically scale an Amazon instance? How?

Yes, you can vertically scale an Amazon instance. To do so:
Spin up a new, larger instance than the one you are currently running
Pause that instance and detach the root EBS volume from the server and discard it
Then stop your live instance and detach its root volume
Note the unique device ID and attach that root volume to your new server
And start it again

11) Explain what T2 instances are.
T2 instances are designed to provide a moderate baseline performance and the capability to burst to higher performance as required by the workload.
12) In VPC with private and public subnets, database servers should ideally be launched into which subnet?
With private and public subnets in VPC, database servers should ideally launch into private subnets.
13) Mention what are the security best practices for Amazon EC2?
For secure Amazon EC2 best practices, follow the following steps
Use AWS Identity and Access Management (IAM) to control access to your AWS resources
Restrict access by allowing only trusted hosts or networks to access ports on your instance
Review the rules in your security groups regularly
Only open up the permissions that you require
Disable password-based logins for instances launched from your AMI

14) Explain how a buffer is used in Amazon Web Services?
The buffer is used to make the system more robust in managing traffic or load by synchronizing different components. Usually, components receive and process requests in an unbalanced way. With the help of a buffer, the components will be balanced and will work at the same speed to provide faster services.
15) While connecting to your instance what are the possible connection issues one might face?
The possible connection errors one might encounter while connecting instances are
Connection timed out
User key not recognized by the server
Host key not found, permission denied
Unprotected private key file
Server refused our key or No supported authentication method available
Error using MindTerm on Safari browser
Error using Mac OS X RDP client


AWS Database Interview Questions And Answers

Our experts provide AWS Database interview questions and answers/FAQs that can develop your career and knowledge and help you find the right job in a good MNC, no matter what kind of company hires you.

1. If I launch a standby RDS instance, will it be in the same Availability Zone as my primary?
A. Only for Oracle RDS types
B. Yes
C. Only if it is configured at launch
D. No
Answer D.

Explanation: No, since the purpose of having a standby instance is to avoid an infrastructure failure (if it happens), therefore the standby instance is stored in a different availability zone, which is a physically different independent infrastructure.

2. When would I prefer Provisioned IOPS over Standard RDS storage?
A. If you have batch-oriented workloads
B. If you use production online transaction processing (OLTP) workloads.
C. If you have workloads that are not sensitive to consistent performance
D. All of the above
Answer A.

Explanation: Provisioned IOPS delivers high IO rates, but on the other hand it is expensive as well. Batch-processing workloads do not require manual intervention; they enable full utilization of systems, therefore Provisioned IOPS would be preferred for batch-oriented workloads.

3. How is Amazon RDS, DynamoDB and Redshift different?
Amazon RDS is a database management service for relational databases; it manages patching, upgrading, backing up of data etc. of databases for you without your intervention. RDS is a DB management service for structured data only.
DynamoDB, on the other hand, is a NoSQL database service; NoSQL deals with unstructured data.
Redshift is an entirely different service; it is a data warehouse product and is used in data analysis.

4. If I am running my DB Instance as a Multi-AZ deployment, can I use the standby DB Instance for read or write operations along with primary DB instance?
A. Yes
B. Only with MySQL based RDS
C. Only for Oracle RDS instances
D. No
Answer D.

Explanation: No, Standby DB instance cannot be used with primary DB instance in parallel, as the former is solely used for standby purposes, it cannot be used unless the primary instance goes down.

5. Your company’s branch offices are all over the world, they use a software with a multi-regional deployment on AWS, they use MySQL 5.6 for data persistence.
The task is to run an hourly batch process and read data from every region to compute cross-regional reports which will be distributed to all the branches. This should be done in the shortest time possible. How will you build the DB architecture in order to meet the requirements?

A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region
Answer A.

Explanation: For this we will take an RDS instance as a master, because it will manage our database for us, and since we have to read from every region, we will put a read replica of this instance in every region the data has to be read from. Option C is not correct, since a read replica is more efficient than a snapshot; a read replica can be promoted to an independent DB instance if needed, but with a DB snapshot it becomes mandatory to launch a separate DB instance.

6. Can I run more than one DB instance for Amazon RDS for free?
Yes. You can run more than one Single-AZ Micro database instance, that too for free! However, any use exceeding 750 instance hours, across all Amazon RDS Single-AZ Micro DB instances, across all eligible database engines and regions, will be billed at standard Amazon RDS prices. For example: if you run two Single-AZ Micro DB instances for 400 hours each in a single month, you will accumulate 800 instance hours of usage, of which 750 hours will be free. You will be billed for the remaining 50 hours at the standard Amazon RDS price.

For a detailed discussion on this topic, please refer our RDS AWS blog.

7. Which AWS services will you use to collect and process e-commerce data for near real-time analysis?
A. Amazon ElastiCache
B. Amazon DynamoDB
C. Amazon Redshift
D. Amazon Elastic MapReduce
Answer B,C.

Explanation: DynamoDB is a fully managed NoSQL database service. DynamoDB can therefore be fed any type of unstructured data, which can include data from e-commerce websites, and an analysis can later be done on it using Amazon Redshift. We are not using Elastic MapReduce, since a near real-time analysis is needed.

8. Can I retrieve only a specific element of the data, if I have a nested JSON data in DynamoDB?
Yes. When using the GetItem, BatchGetItem, Query or Scan APIs, you can define a Projection Expression to determine which attributes should be retrieved from the table. Those attributes can include scalars, sets, or elements of a JSON document.
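A small sketch of a projection expression with the AWS SDK for Java (v1); the table, key and attribute names are hypothetical:

import java.util.Collections;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;

public class ProjectionDemo {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

        // Fetch only selected elements of a nested JSON document
        GetItemRequest request = new GetItemRequest()
                .withTableName("Customers")                                               // hypothetical table
                .withKey(Collections.singletonMap("CustomerId", new AttributeValue("42")))
                .withProjectionExpression("Address.City, Preferences.Language");          // hypothetical attributes

        Map<String, AttributeValue> item = client.getItem(request).getItem();
        System.out.println(item);
    }
}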

9. A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company’s requirements?
A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
B. Amazon RDS for MySQL with Multi-AZ
C. Amazon ElastiCache
D. Amazon DynamoDB
Answer D.

Explanation: DynamoDB has the ability to scale more than RDS or any other relational database service, therefore DynamoDB would be the apt choice.

10. What happens to my backups and DB Snapshots if I delete my DB Instance?
When you delete a DB instance, you have an option of creating a final DB snapshot, if you do that you can restore your database from that snapshot. RDS retains this user-created DB snapshot along with all other manually created DB snapshots after the instance is deleted, also automated backups are deleted and only manually created DB Snapshots are retained.

11. Which of the following use cases are suitable for Amazon DynamoDB?Choose 2 answers

A. Managing web sessions.
B. Storing JSON documents.
C. Storing metadata for Amazon S3 objects.
D. Running relational joins and complex updates.
Answer C,D.

Explanation: If all your JSON data have the same fields eg [id,name,age] then it would be better to store it in a relational database, the metadata on the other hand is unstructured, also running relational joins or complex updates would work on DynamoDB as well.

12. How can I load my data to Amazon Redshift from different data sources like Amazon RDS, Amazon DynamoDB and Amazon EC2?
You can load the data in the following two ways:

You can use the COPY command to load data in parallel directly to Amazon Redshift from Amazon EMR, Amazon DynamoDB, or any SSH-enabled host (a JDBC sketch of this approach follows the list).
AWS Data Pipeline provides a high performance, reliable, fault tolerant solution to load data from a variety of AWS data sources. You can use AWS Data Pipeline to specify the data source, desired data transformations, and then execute a pre-written import script to load your data into Amazon Redshift.
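A minimal sketch of the COPY approach issued over JDBC; the cluster endpoint, credentials, table name and IAM role are all hypothetical placeholders, and the Redshift JDBC driver is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RedshiftCopyDemo {
    public static void main(String[] args) throws Exception {
        // All connection details below are hypothetical placeholders
        try (Connection conn = DriverManager.getConnection(
                "jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev",
                "awsuser", "password");
             Statement stmt = conn.createStatement()) {

            // COPY loads data in parallel from DynamoDB straight into a Redshift table
            stmt.execute(
                "COPY favorite_movies " +
                "FROM 'dynamodb://FavoriteMovies' " +
                "IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole' " +
                "READRATIO 50;");
        }
    }
}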

13. Your application has to retrieve data from your user’s mobile every 5 minutes and the data is stored in DynamoDB, later every day at a particular time the data is extracted into S3 on a per user basis and then your application is later used to visualize the data to the user. You are asked to optimize the architecture of the backend system to lower cost, what would you recommend?
A. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
B. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
C. Introduce Amazon Elasticache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
D. Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3.
Answer C.

Explanation: Since our work requires the data to be extracted and analyzed, to optimize this process one would normally use provisioned IO, but since that is expensive, using ElastiCache to cache the results in memory instead can reduce the provisioned read throughput and hence reduce cost without affecting performance.

14. You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)
A. Deploy ElastiCache in-memory cache running in each availability zone
B. Implement sharding to distribute load to multiple RDS MySQL instances
C. Increase the RDS MySQL Instance size and Implement provisioned IOPS
D. Add an RDS MySQL read replica in each availability zone
Answer A,C.

Explanation: Since it does a lot of reads and writes, provisioned IO may become expensive. But we need high performance as well, therefore the data can be cached using ElastiCache, which can serve the frequently read data. As for RDS, since read contention is happening, the instance size should be increased and provisioned IOPS should be introduced to increase the performance.

15. A startup is running a pilot deployment of around 100 sensors to measure street noise and air quality in urban areas for 3 months. It was noted that every month around 4GB of sensor data is generated. The company uses a load balanced auto scaled layer of EC2 instances and a RDS database with 500 GB standard storage. The pilot was a success and now they want to deploy at least 100K sensors which need to be supported by the backend. You need to store the data for at least 2 years to analyze it. Which setup of the following would you prefer?
A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
B. Ingest data into a DynamoDB table and move old data to a Redshift cluster
C. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
D. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS
Answer C.
Explanation: A Redshift cluster would be preferred because it is easy to scale; also, the work is done in parallel across the nodes, so it is perfect for a bigger workload like our use case. Since 4 GB of data is generated each month, over 2 years that is around 96 GB; and since the sensors will be increased to 100K in number, 96 GB will approximately become 96 TB. Hence option C is the right answer.


Peoplesoft Interview Questions And Answers

Our experts provide PeopleSoft HCM interview questions and answers/FAQs that can develop your career and knowledge and help you find the right job in a good MNC, no matter what kind of company hires you.

1)Explain what is PeopleSoft?

PeopleSoft is an organization that provides e-business application software over the internet. It provides software for Human resource management, Supply chain management, CRM or Customer Relationship Management, Enterprise Performance Management and so on.

2)Mention what all technical things PeopleSoft Billing can do?

With this PeopleSoft application many things can be done, such as:

Create bills
Receive billing data from other PeopleSoft applications
Receive billing data from other applications
Create recurring bills, installment bills, inter & intraunit bills and so on
Review and validate bills
Adjust invoices
Calculate sales, use, and value-added taxes
Defer revenue accounting and so on

3)How many types of pages are available in PeopleSoft?
There are a total of nine types of pages available in PeopleSoft:
Standard page
Secondary page
Sub page
Popup page
Header page
Footer page
Layout page
Search page
Prompt page

4)Explain what is the use of Publish Utility in PeopleSoft?

The publish utility automates the procedure of copying the contents of the entire table into a legacy system or remote database.

5)Mention what is PeopleSoft Multi-Channel framework?

PeopleSoft Multi-Channel Framework provides an integrated infrastructure to support multiple interaction channels for call center agents or other PeopleSoft users who must respond to notifications and incoming requests. PeopleSoft Multi-Channel Framework supports the following channels:

Web collaboration (Chat)
Voice (Telephone)
E-mail
Instant Messaging

6)Mention what are the different types of service operations that PeopleSoft Integration Broker provides?

PeopleSoft Integration Broker supports four types of service operations:

Asynchronous one-way
Asynchronous response/request
Asynchronous to synchronous
Synchronous

7)What are the main attributes of a Component Interface (CI)?
Keys, Properties & Collections, Methods and Name

8)You want to update your password and enter a hint for a forgotten password. What would you access?
User Profile

9)Customization is done in the Dev DB. Which tool will I use to move it to the Prod DB?
Use App Designer and go to Tools > Copy Project > To Database

10)What are parts of an AE program?
Section, Step and Action

11)Which web servers are used only as proxy servers?
MS IIS
Apache

12)Where do you set the web server cache?
In the web server's configuration.properties file

13)Mention where you can add a value to the underlying table in PeopleSoft?

In PeopleSoft, you can add a value into the “prompt table with no edit”.

14)Explain in what ways you can create exceptions in PeopleSoft?

In PeopleSoft, exceptions are handled in two ways

Creating an exception base class that wraps the built-in function call and handles its function parameters consistently, which is the more common way
By calling the built-in function CreateException

15)How do you troubleshoot Application messages staying in Working status?
Possible cause:
1. Message Handler has crashed.
2. The Message Handler processing the message is on another machine, and either the machine or the application server domain is down. The Message Handler working on the message is “blocked”. The service will time out, and the Message Dispatcher will retry the message.

16)What happens when changing from the NO EDIT to the EDIT option?

The user can type only prompt table values, and the default values get populated from the database.

17)What are the views available in App Designer?

1. Development
2. Upgrade

18)What are Menu types available?

1. Component
2. PeopleCode
3. Separator

19)What are the Search Keys you use to find Patches and Fixes?

1. Release
2. Updated date time
3. Report Id

20)Mention what are the tools provided by PeopleSoft for testing your integration development?

The tools provided by PeopleSoft for testing your integration development are:

Send master utility
Simple post utility
Automated integration point testing
Transformation test utility
Handler tester
Schema tester


Ethical Hacker Interview Questions And Answers

Our experts provide Ethical Hacker interview questions and answers/FAQs that can develop your career and knowledge and help you find the right job in a good MNC, no matter what kind of company hires you.

1)Explain what is Ethical Hacking?

Ethical hacking is when a person is allowed to hack a system with the permission of the product owner, in order to find weaknesses in the system and later fix them.

2)List out some of the common tools used by Ethical hackers?

Metasploit
Wireshark
NMAP
John The Ripper
Maltego

3)What are the types of ethical hackers?

The types of ethical hackers are

Grey Box hackers or Cyberwarrior
Black Box penetration Testers
White Box penetration Testers
Certified Ethical hacker

4)What is Enumeration ?

Enumeration is defined as the process of extracting user names, machine names, network resources, shares, and services from a system. Enumeration techniques are conducted in an Intranet Environment.

5)What is LDAP ( Lightweight Directory Access Protocol ) ?

The Lightweight Directory Access protocol is a protocol used to access the directory listings within Active Directory or from the other directory services.

6) Explain what is Brute Force Hack?

Brute force hacking is a technique for cracking passwords and gaining access to system and network resources. It takes a lot of time, and it needs the hacker to learn about JavaScript. For this purpose, one can use the tool named “Hydra”.

7) Explain what is Network Sniffing?

A network sniffer monitors data flowing over computer network links. By allowing you to capture and view the packet-level data on your network, a sniffer tool can help you locate network problems. Sniffers can be used both for stealing information off a network and for legitimate network management.

8)What are the types of hacking stages ?

a. Gain access

b. Gaining privileges

c. Executing applications

d. Hiding the files

e. Covering the tracks

9)Types of password cracking techniques?

a. Dictionary attacks

b. Brute Forcing Attacks

c. Hybrid Attack

d. Syllable Attack

e. Rule – based Attack

10)What is MIB ( Management Information Base )?

It is a (virtual) database that contains information about all the network objects managed via SNMP. This database is hierarchical, and all the objects contained in it are addressed by object identifiers (OIDs).

11)What is NTP ?

This is a protocol whose main function is to synchronize the clocks of networked or connected computers.

12)Explain what is Pharming and Defacement?

Pharming: In this technique the attacker compromises the DNS (Domain Name System) servers, or the DNS on the user's computer, so that traffic is directed to a malicious site
Defacement: In this technique the attacker replaces the organization's website with a different page. It contains the hacker's name and images, and may even include messages and background music

13)Explain what is Keylogger Trojan?

A keylogger Trojan is malicious software that can monitor your keystrokes, logging them to a file and sending them off to remote attackers. When the desired behaviour is observed, it will record the keystrokes and capture your login username and password.

14)Definition and types of scanning?

Scanning refers to a set of procedures for identifying hosts, ports, and services in a network. Scanning is one of the components of intelligence gathering that an attacker uses to create a profile of the target organization.

Scanning types:

Port Scanning
Vulnerability Scanning
Network Scanning


Hadoop Interview Questions And Answers

Our experts provide Hadoop interview questions and answers/FAQs that can develop your career and knowledge and help you find the right job in a good MNC, no matter what kind of company hires you.

1)Explain “Big Data” and what are five V’s of Big Data?
“Big data” is the term for a collection of large and complex data sets that are difficult to process using relational database management tools or traditional data processing applications. It is difficult to capture, curate, store, search, share, transfer, analyze, and visualize Big Data. Big Data has emerged as an opportunity for companies: they can now successfully derive value from their data and gain a distinct advantage over their competitors through enhanced business decision-making capabilities.

♣ Tip: It will be a good idea to talk about the 5Vs in such questions, whether it is asked specifically or not!

Volume: The volume represents the amount of data which is growing at an exponential rate i.e. in Petabytes and Exabytes.
Velocity: Velocity refers to the rate at which data is growing, which is very fast. Today, yesterday's data is considered old data. Nowadays, social media is a major contributor to the velocity of growing data.
Variety: Variety refers to the heterogeneity of data types. In other words, the data that is gathered comes in a variety of formats like videos, audios, CSV, etc. So, these various formats represent the variety of data.
Veracity: Veracity refers to the data in doubt or the uncertainty of data available due to data inconsistency and incompleteness. Available data can sometimes get messy and may be difficult to trust. With many forms of big data, quality and accuracy are difficult to control. Volume is often the reason behind the lack of quality and accuracy in the data.
Value: It is all well and good to have access to big data, but unless we can turn it into value it is useless. By turning it into value I mean: is it adding to the benefits of the organization? Is the organization working on Big Data achieving a high ROI (Return On Investment)? Unless it adds to their profits, working on Big Data is useless.

2)What is Hadoop and its components.
When “Big Data” emerged as a problem, Apache Hadoop evolved as a solution to it. Apache Hadoop is a framework which provides us various services or tools to store and process Big Data. It helps in analyzing Big Data and making business decisions out of it, which can’t be done efficiently and effectively using traditional systems.

♣ Tip: Now, while explaining Hadoop, you should also explain the main components of Hadoop, i.e.:

Storage unit– HDFS (NameNode, DataNode)
Processing framework– YARN (ResourceManager, NodeManager)

3)Name some companies that use Hadoop.?

Yahoo (One of the biggest user & more than 80% code contributor to Hadoop)
Facebook
Netflix
Amazon
Adobe
eBay
Hulu
Spotify
Rubikloud
Twitter

4)What are active and passive “NameNodes”?
In HA (High Availability) architecture, we have two NameNodes – Active “NameNode” and Passive “NameNode”.

Active “NameNode” is the “NameNode” which works and runs in the cluster.
Passive “NameNode” is a standby “NameNode”, which has similar data as active “NameNode”.
When the active “NameNode” fails, the passive “NameNode” replaces the active “NameNode” in the cluster. Hence, the cluster is never without a “NameNode” and so it never fails.

5)What is a checkpoint?
In brief, “Checkpointing” is a process that takes an FsImage, edit log and compacts them into a new FsImage. Thus, instead of replaying an edit log, the NameNode can load the final in-memory state directly from the FsImage. This is a far more efficient operation and reduces NameNode startup time. Checkpointing is performed by Secondary NameNode.

6)What is the port number for NameNode, Task Tracker and Job Tracker?

NameNode 50070

Job Tracker 50030

Task Tracker 50060

7)What does ‘jps’ command do?
The ‘jps’ command helps us to check if the Hadoop daemons are running or not. It shows all the Hadoop daemons i.e namenode, datanode, resourcemanager, nodemanager etc. that are running on the machine.

8) Explain about the indexing process in HDFS?
Indexing process in HDFS depends on the block size. HDFS stores the last part of the data that further points to the address where the next part of data chunk is stored.

9)Whenever a client submits a hadoop job, who receives it?

NameNode receives the Hadoop job which then looks for the data requested by the client and provides the block information. JobTracker takes care of resource allocation of the hadoop job to ensure timely completion.

10)What are the main configuration parameters in a “MapReduce” program?
The main configuration parameters which users need to specify in the “MapReduce” framework are listed below (a minimal driver sketch follows the list):

Job’s input locations in the distributed file system
Job’s output location in the distributed file system
Input format of data
Output format of data
Class containing the map function
Class containing the reduce function
JAR file containing the mapper, reducer and driver classes
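A minimal word-count driver sketch showing where each of these parameters is specified (class and path names are illustrative only):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCountDriver {

    // Class containing the map function
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Class containing the reduce function
    public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);              // JAR containing mapper, reducer and driver

        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        job.setInputFormatClass(TextInputFormat.class);        // input format of the data
        job.setOutputFormatClass(TextOutputFormat.class);      // output format of the data
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));      // job's input location in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));    // job's output location in HDFS

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}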

11)What is the purpose of “RecordReader” in Hadoop?
The “InputSplit” defines a slice of work, but does not describe how to access it. The “RecordReader” class loads the data from its source and converts it into (key, value) pairs suitable for reading by the “Mapper” task. The “RecordReader” instance is defined by the “Input Format”.

12)How do “reducers” communicate with each other?
This is a tricky question. The “MapReduce” programming model does not allow “reducers” to communicate with each other. “Reducers” run in isolation.

13)What is a “Combiner”?
A “Combiner” is a mini “reducer” that performs the local “reduce” task. It receives the input from the “mapper” on a particular “node” and sends the output to the “reducer”. “Combiners” help in enhancing the efficiency of “MapReduce” by reducing the quantum of data that is required to be sent to the “reducers”.

14) What are the different relational operations in “Pig Latin” you worked with?
Different relational operators are:

foreach
order by
filters
group
distinct
join
limit

15) What is a UDF?
If some functions are unavailable in built-in operators, we can programmatically create User Defined Functions (UDF) to bring those functionalities using other languages like Java, Python, Ruby, etc. and embed it in Script file.
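A minimal sketch of a Java UDF for Pig (the class and jar names are hypothetical):

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// Hypothetical UDF that upper-cases its first argument
public class UpperCase extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        return ((String) input.get(0)).toUpperCase();
    }
}

Once compiled into a jar, such a UDF would typically be registered in the Pig script with REGISTER and then invoked inside a FOREACH ... GENERATE statement.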

16)What are the components of Apache HBase?
HBase has three major components, i.e. HMaster Server, HBase RegionServer and Zookeeper.

Region Server: A table can be divided into several regions. A group of regions is served to the clients by a Region Server.
HMaster: It coordinates and manages the Region Server (similar as NameNode manages DataNode in HDFS).
ZooKeeper: Zookeeper acts as a coordinator inside the HBase distributed environment. It helps in maintaining server state inside the cluster by communicating through sessions.

17) Explain about the different catalog tables in HBase?

The two important catalog tables in HBase, are ROOT and META. ROOT table tracks where the META table is and META table stores all the regions in the system.

18)Differentiate between Sqoop and distCP.

DistCP utility can be used to transfer data between clusters whereas Sqoop can be used to transfer data only between Hadoop and RDBMS.

19)How would you check whether your NameNode is working or not?

There are several ways to check the status of the NameNode. Mostly, one uses the jps command to check the status of all daemons running in the HDFS

20)What is checkpointing in Hadoop?
Checkpointing is the process of combining the Edit Logs with the FsImage (File System Image). It is performed by the Secondary NameNode.
