Elasticsearch
Posted By : Aquib Ahamad | 15-Dec-2020
What is Elasticsearch?
• Elasticsearch is an open-source, real-time, distributed search and analytics engine written in Java, built on top of the Apache Lucene library.
• It exposes Lucene's capabilities through a REST interface.
• It supports full-text search and is completely document-based (documents instead of tables and schemas), which makes it a good fit for single-page application projects.
Why Elasticsearch?
Query
• Lets you perform and combine many types of searches: structured, unstructured, geo, and metric.
• Gives you the liberty of asking a query “any way you want”.
Analyze
• Lets you make sense of billions of log lines easily.
• Provides aggregations that help you zoom out to explore trends and patterns in your data.
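As an illustrative sketch of those aggregations, here is what a simple terms-aggregation request body looks like. The "logs" index and "level" field are hypothetical, and since sending the request needs a running cluster, the sketch only builds and validates the JSON locally:

```shell
# Hypothetical terms aggregation: bucket log documents by their "level" value.
cat > agg_query.json <<'EOF'
{
  "size": 0,
  "aggs": {
    "levels": {
      "terms": { "field": "level.keyword" }
    }
  }
}
EOF
# Against a running cluster you would send it with:
#   curl -XPOST 'localhost:9200/logs/_search?pretty' -H 'Content-Type: application/json' -d @agg_query.json
# Here we only check that the request body is well-formed JSON:
python3 -m json.tool agg_query.json
```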
The requirements for Elasticsearch are simple: Java 8 (the specific recommended version is Oracle JDK 1.8.0_131). Take a look at this Logstash tutorial to ensure that you are set. Also, make sure your operating system is on the Elastic support matrix; otherwise, you might run up against strange and unpredictable issues. Once that is done, you can start by installing Elasticsearch.
You can download Elasticsearch as a standalone distribution or install it using the apt and yum repositories. We will install Elasticsearch on an Ubuntu 16.04 machine running on AWS EC2 using apt.
First, you need to add Elastic's signing key so that you can verify the downloaded packages (skip this step if you have already installed packages from Elastic):
# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
If you are installing on Debian, install the apt-transport-https package:
# sudo apt-get install apt-transport-https
The next step is to add the repository definition to your system:
# echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
All that's left to do is to update your repositories and install Elasticsearch
#sudo apt-get update
#sudo apt-get install elasticsearch
Configuring Elasticsearch
We configure Elasticsearch using a configuration file whose location depends on your operating system. In this file you can configure general settings (e.g. the node name), network settings (e.g. host and port), where data is stored, memory, log files, and more.
For development and testing purposes the default settings will suffice, yet it is recommended that you do some research into which settings you should define manually before going into production.
For example, when you are installing Elasticsearch on the cloud, it is a best practice to bind Elasticsearch to either a private IP or localhost:
# sudo vim /etc/elasticsearch/elasticsearch.yml
network.host: "localhost"
http.port: 9200
Running Elasticsearch
You will have to launch Elasticsearch manually as it won't automatically run after installation. How you run Elasticsearch will depend on your specific system. On most Linux and Unix-based systems you can use this command:
# sudo service elasticsearch start
And that’s it! To confirm that everything is working fine, simply point curl or your browser to http://localhost:9200, and you should see something like the following output:
{
"name" : "33QdmXw",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "mTkBe_AlSZGbX-vDIe_vZQ",
"version" : {
"number" : "6.1.2",
"build_hash" : "5b1fea5",
"build_date" : "2018-01-10T02:35:59.208Z",
"build_snapshot" : false,
"lucene_version" : "7.1.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
To debug the process of running Elasticsearch, use the Elasticsearch log files located (on Deb) in /var/log/elasticsearch/.
Creating an Elasticsearch Index
The process of adding data to Elasticsearch is called Indexing. This is because when you feed data into Elasticsearch, the data is placed into Apache Lucene indexes. This makes sense because Elasticsearch uses the Lucene indexes to store and retrieve its data. Although you do not need to know a lot about Lucene, it does help to know how it works when you start getting serious with Elasticsearch.
Elasticsearch exposes a REST API, so you can use either the POST or the PUT method to add data to it. You use PUT when you know, or want to specify, the ID of the data item, and POST if you want Elasticsearch to generate an ID for the item:
curl -XPOST 'localhost:9200/logs/my_app' -H 'Content-Type: application/json' -d'
{
"timestamp": "2018-01-24 12:34:56",
"message": "User logged in",
"user_id": 4,
"admin": false
}
'
curl -X PUT 'localhost:9200/app/users/4' -H 'Content-Type: application/json' -d '
{
"id": 4,
"username": "john",
"last_login": "2018-01-25 12:34:56"
}
'
The data for the documents is sent as JSON objects. You might be wondering how we can index data without defining its structure first. Well, with Elasticsearch, as with any other NoSQL database, there is no need to define the structure of the data beforehand. To ensure optimal performance, though, you can define Elasticsearch mappings according to data types. More on this later.
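As a sketch of what such a mapping could look like, here is an explicit mapping in ES 6.x syntax (a single mapping type per index), with field types mirroring the log document indexed above. The curl line assumes a running cluster, so only the JSON body itself is validated here:

```shell
# Hypothetical explicit mapping for the "logs" index used above (ES 6.x style,
# with a single mapping type "my_app"). Field types match the sample document.
cat > mapping.json <<'EOF'
{
  "mappings": {
    "my_app": {
      "properties": {
        "timestamp": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss" },
        "message":   { "type": "text" },
        "user_id":   { "type": "integer" },
        "admin":     { "type": "boolean" }
      }
    }
  }
}
EOF
# Against a running cluster you would create the index with:
#   curl -XPUT 'localhost:9200/logs' -H 'Content-Type: application/json' -d @mapping.json
# Check that the body is well-formed JSON:
python3 -m json.tool mapping.json
```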
If you are using any of the Beats shippers (e.g. Filebeat or Metricbeat), or Logstash, those parts of the ELK Stack will automatically create the indices.
To see a list of your Elasticsearch indices, use the _cat API:
# curl -XGET 'localhost:9200/_cat/indices?v'
Elasticsearch Querying
As soon as you index your data into Elasticsearch, you can start searching and analyzing it. The simplest query of all is to fetch a single item.
# curl -XGET 'localhost:9200/app/users/4?pretty'
The result contains a number of extra fields that describe both the search and the result. Here’s a quick rundown:
• took: The time, in milliseconds, that the search took
• timed_out: If the search timed out
• shards: The number of Lucene shards searched, and their success and failure rates
• hits: The actual results, along with meta information for the results
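For illustration, a response to a _search request contains these fields and looks roughly like this (the values below are made up, not real output):

```
{
  "took": 5,
  "timed_out": false,
  "_shards": { "total": 5, "successful": 5, "skipped": 0, "failed": 0 },
  "hits": {
    "total": 1,
    "max_score": 0.28,
    "hits": [
      {
        "_index": "app",
        "_type": "users",
        "_id": "4",
        "_score": 0.28,
        "_source": { "id": 4, "username": "john", "last_login": "2018-01-25 12:34:56" }
      }
    ]
  }
}
```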
The search we did above is known as a URI Search and is the simplest way to query Elasticsearch. By providing only a word, ES will search all of the fields of all documents for that word. You can build more specific searches by using Lucene queries:
• username:johnb – Looks for documents where the username field is equal to “johnb”
• john* – Looks for documents that contain terms starting with “john” followed by zero or more characters, such as “john,” “johnb,” and “johnson”
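These Lucene query strings plug straight into the q parameter of a URI Search. A sketch, reusing the hypothetical "app" index from the examples above (the curl lines are shown commented out because they need a live cluster on localhost:9200):

```shell
# URI Search sketches using the Lucene query strings above.
FIELD_QUERY='localhost:9200/app/_search?q=username:johnb&pretty'
PREFIX_QUERY='localhost:9200/app/_search?q=john*&pretty'
# Against a running cluster you would run:
#   curl -XGET "$FIELD_QUERY"
#   curl -XGET "$PREFIX_QUERY"
echo "GET $FIELD_QUERY"
echo "GET $PREFIX_QUERY"
```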
About Author
Aquib Ahamad
He has good knowledge of RHCSA, RHCE, AWS, Git, and big data. He is a quick learner who is always looking to learn new technologies.