REST API script for Concepts and Demo sections
(e-Lab guide and “teaser” scripts are separate)

Use Cases

You might wonder “What are some of the business and technological reasons for using the REST API?” In other words, what are some of the things that could be usefully done programmatically with scripts or code?

You can integrate third-party network monitoring tools such as Splunk or Nagios to ingest RSA NetWitness Suite packet data.  You could integrate NetWitness data into third-party visualization tools.  You could programmatically generate .csv files, spreadsheets, and charts every month from statistics pulled out of NetWitness.  You could report on events per second consumed across the enterprise, validate that packets are held for required periods of time, or make better decisions about resource management based on measurable trends in those statistics over time.

For example, when a company invests in an expensive tool such as the RSA NetWitness Suite, it must see value from it and be able to communicate that value to senior leadership. Here is an example of how this can be done using REST. Even without visualization built into the platform, you can use REST to pull the capture rates of all the enterprise’s decoders, write them to a database, and chart quarterly that the organization was, for instance, capturing 10,000 events per second (EPS) but is now up to 50,000.

This works for capacity management as well.  Let’s say that your organization experiences 10,000 EPS and has 30 days of raw log retention.  If, over the course of the fiscal year, retention drops from 30 days down to 5 days, then next year’s budget may need to include additional DACs. The metrics that REST can provide help your leadership make better-informed decisions about the platform and possible upcoming gaps, so they can spend their capital wisely.

REST can help prevent costly misconfigurations. You can develop automated processes with REST to conduct configuration consistency checks across multiple disparate systems. For example, suppose you have five decoders deployed and, a year later, your organization adds a sixth.  How would you know whether they are all configured exactly the same way?  You could manually compare each one, point for point, using the GUI, but it may be more efficient to dump a known-good decoder’s configuration settings and write them to the new decoder.

Does each decoder have the same Users?  Should they?  Is the capture rate set the same?

Create automated processes that can identify baselines for performance and tuning. 

REST can help you “compare apples to apples,” as your environment grows.

When one decoder is doing x and the other is doing y, it can impact the data very significantly.  By doing configuration checks and ensuring consistency, you can eliminate the system itself as the cause of data inconsistencies.

 

REST can also be used for compliance on the logs side of RSA NetWitness Suite, the SIEM component.  Organizations get audited by compliance and audit teams that say things such as “Here are 500 Unix servers that are in this PCI segment; show me that they are all logging.”

That could be a tedious task within the Investigation module of the NetWitness UI. But the decoder exposes its log statistics through REST, and you can script against that interface to pull all of the host names and IP addresses that the decoder knows about, compare them to the audit list, find any gaps, and then work with the platform owner to remediate those gaps so that those devices start logging.

This isn’t necessarily a “one-off.” Audits can become periodic, even frequent, with platform owners asked to send up a host list.  In that case, REST could allow you to hook into a CMDB or asset management tool and pull a dump of all the Windows servers that have come online in the last month.  Then, those can be added into the platform for logging. Over time, as new environments are added, there is a programmatic way to (1) ensure that they are compliant, (2) speed time to remediation, and (3) better focus resources.

 

 

-----------------------------------------------    DEMO SECTION    -------------------------------------------------------

 

 

What Is the REST API?

Again, REST stands for REpresentational State Transfer and is a well-understood way to interact with web services. Web services are known to be easy to work with. That is why many systems that have a complicated, interdependent data set on the backend choose to simplify access to it by exposing it to platform owners, applications, scripts, and users through REST.

From a development perspective, the easiest thing to talk to is a web service, because it is essentially HTTP.  When you are “speaking” HTTP, you are, for the most part, doing a “get” when you want to “read” a metric.  Thus, to find the oldest packet time, you just do a “get”. We’ll see what that “get” looks like from within a Python script a little later.

The way that you interact with the web service is thus independent of the technical complexities of the backend.  With HTTP, there is no need to know C or C++ or how a customized backend database works. Instead, the backend and all its data are exposed in a very simple-to-understand web language.

 

You can do an HTTP “get” if you want to read some information, or you can do a “put” or a “post” if you’re looking to write some information.  So, a “put” could be used to upload a file such as a .pcap or a custom feed, whereas a “post” might, for example, be used to set a configuration.  In an example that you’ll see a little later, we’ll look at how to do a “get”, which is the most common use.

Even if you have little or no background in programming languages, you can use curl, a simple, thin HTTP tool for interacting with HTTP nodes.

Demo: Using REST with curl

Since everything in REST is HTTP and HTTPS, we can start with a curl example and then work our way up to a Python example.

Let’s say, for a moment, that we want to check packet or log retention in our RSA NetWitness Logs and Packets environment in order to better understand how long, or for what duration, our organization is keeping packets or logs.  In other words, maybe there is a business requirement that we retain them for one month.  How would we identify the state of our packet and log retention? And how could we do this repeatedly over time for all of our Packet Decoders and Log Decoders?

For this example, let’s use curl.  In the command template, we can swap out the dollar-sign placeholder and enter the fully qualified domain name or the IP address of the host that we believe contains that information.  For this tutorial, let’s check the Log Decoder, although this could just as easily be done on a Packet Decoder.

Next, you have the REST port, which is specific to the appliance service type.  In this case, the service type is Log Decoder, so the port number is 50102. After the port comes the node that you want to access, expressed as a piece of a string, or a “path” if you will.  This is what we want to “get” from the Log Decoder. We are doing a “get” because we want to “read” this particular element.
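Putting those pieces together, the request URL takes roughly this shape (the msg, force-content-type, and expiry parameter names follow the NetWitness REST conventions described in this script; confirm them against your version’s REST documentation):

http://<HOST>:50102/database/stats/packet.oldest.file.time?msg=get&force-content-type=text/plain&expiry=600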

From a UI perspective, just so that you can see, let’s switch over to the Explore View of the Log Decoder from within the GUI. 

 

In the Explore view of a decoder, we can go to the Node entitled database, and then to the sub-node entitled stats.

Then we can see packet.oldest.file.time

(HIGHLIGHT ON packet.oldest.file.time)

We are trying to “get” that value.

As an aside, you may ask why, within the Explore view, parameters on a Log Decoder have names like “packet.oldest.file.time”.  When RSA purchased NetWitness, many of the naming conventions remained the same as RSA developed “NetWitness for Logs”. A ‘packet’ is synonymous with a ‘log’ in the sense that both are ‘raw’ from the system’s perspective, as distinct from ‘meta’, which is common across both packet and log environments.

Regardless, we are trying to “get” a value.

And we can force the format of the output that the web service sends back to us.  We can have the response sent as plain text, XML, or even JSON.  However we want to consume it can be controlled here in the request string.  We’ll see what these look like in a minute or two.

We can also control how long our request runs before it times out. In other words, we will not ask the web service to run the query forever.

Let’s take a look at an example of what a curl request might look like.

We’ll copy this string from our text document and paste it into the command window.

Next, let’s put in the <HOST> address which, in this example, is our Log Decoder: 172.16.198.3. After that comes the REST port that is specific to Log Decoders.

The next section, shown highlighted, represents the thing that we are looking to capture. “Slash database slash stats slash packet.oldest.file.time”.

/database/stats/packet.oldest.file.time 

Recall that this mirrors exactly what we saw in the UI.

We are sending the Log Decoder a “get” message and we are telling it “Hey decoder, when you output a response back to me, please put it in plain text.”
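Assembled, the command we paste looks something like this (same caveat about the exact parameter names as above):

curl "http://172.16.198.3:50102/database/stats/packet.oldest.file.time?msg=get&force-content-type=text/plain"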

We are asking “What is the oldest packet file time on my particular decoder?”

When we press Enter, you can see that it returns the date, November 10, 2016, and the time.

Now, using curl, we can get that date, but what happens if we want to manipulate that information?  The value is just displayed on our command line; we can’t do anything with it except maybe write it down.

Perhaps we want to convert it into the number of days elapsed, so that we would know the oldest packet has been retained for 30 days, for example.  In this case, we would need to do a little bit of development in Python or JavaScript or any language that you want to use, because it is a very simple HTTP conversation.

Now let’s shift our focus and look at another metric.

 

Another Example Using Another Stat

Let’s say that we want to “get” the rate at which our Log Decoder is capturing logs. The Host IP and port stay the same, but instead of the /database/stats/packet.oldest.file.time path that we saw in the previous example, we want a different metric. Specifically, we want the decoder node, the sub-node entitled stats, and the parameter entitled capture.rate.  We will send a “get” message for “slash decoder, slash stats, slash capture.rate”.

/database/stats/packet.oldest.file.time

/decoder/stats/capture.rate

Let’s copy that and paste it into our command-line string. We are leaving it mostly the same and just changing the path that we are “getting” this time. We want to talk not with the database but with the node entitled decoder, and pull its stats for capture rate.  Since this is a tutorial environment, it returns zero, because the tutorial environment just imports a log file.  In a production environment, you could imagine that logs would be flowing in on a regular basis and this returned number would be higher than zero.

And, for business purposes, you might want to know what the capture.rate is over time in order to monitor peak times, capacities, and the overall flow. 

In this case, you could use a script to run this command repeatedly, every 5 or 10 minutes, for example.  Then you could build a series of metric points that you could put into a .csv, and the .csv could be the input to a charting tool.
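As a rough illustration, a small Python poller along these lines could collect those metric points. The host, credentials, output file name, and URL parameters here are placeholders following the conventions above, not part of the tutorial files:

import csv
import time

import requests

HOST = '172.16.198.3'  # the Log Decoder from this demo
URL = 'http://%s:50102/decoder/stats/capture.rate?msg=get&force-content-type=text/plain' % HOST

# Append one (timestamp, capture rate) row every 5 minutes
with open('capture_rate.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    while True:
        r = requests.get(URL, auth=('admin', 'netwitness'))  # placeholder credentials
        writer.writerow([time.strftime('%Y-%m-%d %H:%M:%S'), r.text.strip()])
        f.flush()
        time.sleep(300)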

 

Now let’s take a look at what happens if we change the output type from plain text to XML.  When we press Return, you can see that it applies the markup formatting.  It is the same value, or date, but if a developer wanted to deal with the data set in XML format, they could do so.

Let’s change it to application/json.  Now we get the response in JSON format.  Various libraries are available to parse XML or JSON. Plain text, however, may suit you fine unless you have a large list of data.  We’ll see the output for that in a few moments.
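For reference, the only piece that changes across these three runs is the content-type value in the request string (assuming the force-content-type parameter shown earlier):

force-content-type=text/plain
force-content-type=application/xml
force-content-type=application/json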

curl is great if you just want to quickly run an ad hoc query, or maybe run a cron job every 5 minutes, so that you are not constantly refreshing the page, or refreshing it across multiple different devices.

Let’s look at another way that we could run our “get” request in a command line.

We’ve been looking at a one-time “get” request, but let’s say that you want to get a particular value not just from one Log Decoder but from all 5 of your Log Decoders, or maybe from 50 of them.  You could manually run curl against each and every one, or you could use REST programmatically.  Let’s see how this could work.

Let’s say that you have the IP addresses for all of your Log Decoders in a .csv file on disk. You can see that we have a file here called “devices underscore decoders dot csv”. For this tutorial, all of the devices have the same IP address. But, obviously, in the field they would all be different.

devices_decoders.csv


Now let’s run a “for” loop on our command line.

0.22.52.26

And we can copy our curl command from before and add it to the “for” command

for i in `cat devices_decoders.csv`; do

do the following -- In this case, run the curl command using the IP Address of each Decoder.

Here, we’ll put in our curl command but, instead of the hard-coded IP address of that single Log Decoder, let’s put in a dollar-sign variable.  Now this (HIGHLIGHT ON $i) will get replaced by every IP that is listed in our .csv file.

In other words, the “for” loop will do a “get” on the first IP listed in the .csv file, then on the second IP listed, and so forth.
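Written out on one line, the assembled loop looks roughly like this (with the same placeholder query parameters as before):

for i in `cat devices_decoders.csv`; do curl "http://$i:50102/database/stats/packet.oldest.file.time?msg=get&force-content-type=text/plain"; done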

When it is “done” it returns all five that were in the .csv.

We can change the output from xml to plain text.

 

When it is done, it returns the results for each of the 5 IP addresses.  So, if your .csv had 50 distinct IP addresses, it would return 50 values here.

You can see how using REST has saved some time when compared with having to go through the UI to get all these values.

Simple Python Script Example

Now let’s take a look at a very simple Python script entitled “rest_test_pkt_retention.py” that pulls the oldest packet file time.

This script is for tutorial purposes and has no real error checking or extras.

Let’s walk through it.

Remember, earlier we mentioned how a developer and a script can “talk” HTTP.  This section of the Python script shows that very concept.

Python is a relatively easy-to-use language that works well with text, which is why a lot of people use it.

We have a very simple method here called “httpGet”, and we are using a simple Python module called “requests” that will “get” the URL that we have formatted. All of these format strings are going to get replaced by values that we pass in.  This script is very simple, and you would not normally hard-code a password as we have done here for tutorial purposes.  Normally, of course, it would need to be hashed or otherwise protected.

Running this Python script does the same thing that we did with curl because, in this script, you can see that we are only printing the contents of the response, where it says print r.content.
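For reference, here is a minimal sketch of what such a script might contain; the exact function shape, URL layout, and credentials are reconstructions from this narration, not the actual tutorial file:

import requests

def httpGet(hostname, port, path, username, password):
    # Build the REST URL from the pieces passed in and force a plain-text response
    url = 'http://%s:%s/%s?msg=get&force-content-type=text/plain' % (hostname, port, path)
    return requests.get(url, auth=(username, password))

# Hard-coded credentials for tutorial purposes only -- never do this in production
r = httpGet('172.16.198.3', '50102', 'database/stats/packet.oldest.file.time',
            'admin', 'netwitness')
print(r.content)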

However, because the returned value of the oldest packet file time is now in a script, we could easily write another method to manipulate that data, for example, converting it into days elapsed.  Then we wouldn’t have to look at November 10, 2016, calculate how long the file has been retained, and then go back and recalculate for 50 decoders, or recalculate again every week thereafter.
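A sketch of such a helper, assuming the service returns a timestamp in a form like "2016-Nov-10 14:05:00" (verify the format your decoder actually returns and adjust the parse string):

from datetime import datetime

def days_retained(timestamp):
    # Parse the value returned for packet.oldest.file.time
    # (the format string here is an assumption -- adjust it to your output)
    oldest = datetime.strptime(timestamp, '%Y-%b-%d %H:%M:%S')
    return (datetime.now() - oldest).days

print(days_retained('2016-Nov-10 14:05:00'))  # prints the retention in days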

Now let’s run that Python script that will leverage the same http “get” that we did earlier using curl.

Again, since this .csv is contrived for tutorial purposes, it displays the same information five times, but you can see that it lists each of the decoders within that .csv and the date/time of that decoder’s oldest packet file.

 

Configuration Consistency

Environments change over time. For example, many organizations that use NetWitness Logs and Packets modify their “index-concentrator-custom.xml” file to add new meta keys.  You can have multiple administrators managing the same system.  Maybe somebody made a change but forgot to replicate it on the other Concentrators.

 

Let’s take a look at a script that checks one of the nodes of the Concentrator for the language it understands.  You could quickly use this to check and validate what all of the other Concentrators are set to, and make sure that critical configuration items, such as your keys, descriptions, format, level, and Value Max settings, are consistent across all Concentrators.

If they are not consistent, index slots may fill up or the data may be negatively affected.

So how do you make sure that a very important piece of configuration data is set properly across the enterprise?

Do you just dump it out for all of the components in the system and eyeball it? Or do you write a script that pulls it and does all the “dirty work” of validating consistency for you?

The latter approach can prevent a lot of problems from a misconfiguration perspective.

You might ask, “What is an example of a critical parameter value that must be set correctly, and why might it be critical?” One example is the language that the Concentrator understands. Say that, in your custom XML, you take a meta key like “action” and override it so that 10,000 values can be indexed for it. If Concentrator A is set to 1,000 and Concentrator B is set to 10,000, then Concentrator A will fill up its 1,000 buckets and then stop populating that meta key with new values for that index slice. (Note that the data is still stored in the metadb; it is just not directly searchable.)  Now you have one Concentrator that pulled in the 10,000 values it saw in that time frame, while the other one stopped at 1,000.

Systemically, you are not going to know that anything is wrong, because that is, quote unquote, a “valid” configuration.  It is a misconfiguration because of the consistency issue, not a systemic formatting problem.  Thus, Analysts looking at the data may not be able to see particular pieces of it and may have no clue that the data even exists.

Let’s talk about another situation.  Say that you have “email” set to two and a half million values on one Concentrator and two and a half billion on another. That second Concentrator is going to be very busy and could potentially impact storage and performance.

Consistency is key. When you make a change you need to test it and make sure that it is replicated across the board. How do you validate that the work is done? REST is a very good tool for that.

 

Let’s look at the script for a moment.  Notice that very little in the script has changed except the node that it points to.  And, you can see that it now points to the Concentrator REST port so that we can “get” a value from the Concentrator service type. The script then indicates that it wants to go to the node entitled “index”, and that the message sent to the Concentrator requests the language.

(HIGHLIGHT ON msg=language)

(HIGHLIGHT ON f=open(‘device…))

This line opens up the file for reading and then iterates through each line, stripping the newline and then sending that line (which happens to be a Host Name) into the HostName parameter here. Then this gets populated right in here…

(HIGHLIGHT ON %s)

The URL or path will not change from Concentrator to Concentrator, so this URL

(HIGHLIGHT ON ‘index?msg…’) will stay the same.

So, nothing will change except this

(HIGHLIGHT ON line, ….)

Since the IPs are usually different for the various Concentrators in the environment, we had the .csv with the line-delimited IPs of the various Concentrators.

Then we send the Port, and then the Path that we are looking for in the REST API.
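Pulling those pieces together, the core of such a script probably resembles this sketch (the file name and credentials are placeholders, and 50105 is used here as the Concentrator REST port per the port-per-service-type convention above):

import requests

# Iterate over the line-delimited Concentrator hosts and ask each for its index language
with open('devices_concentrators.csv') as f:  # hypothetical file name
    for line in f:
        hostname = line.strip()
        url = 'http://%s:50105/index?msg=language&force-content-type=text/plain' % hostname
        r = requests.get(url, auth=('admin', 'netwitness'))  # placeholder credentials
        print(hostname, r.content)

You could then diff each host’s output against a known-good Concentrator to spot inconsistent keys.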

 

Now that you’ve seen how a script can programmatically get information through the REST API, you can imagine how that script might “get” a parameter and do something with it.

Security of the REST API

Let’s talk about some security considerations when using REST.

The REST API ships with the product and is enabled by default! Obviously, you will need to change the default credentials.  It is also set, by default, to use HTTP; setting it to HTTPS requires an extra step.

Since communication with REST happens over HTTP, the values travel in cleartext.  This means that if you, for example, used REST to change a password, that password would be sent in cleartext over the wire. It also means that you will need to protect the single-factor authentication used for REST.

Changing to SSL has minimal or no impact on performance. Version 10.3 used REST for queries, but in 10.4 and above nothing else in the platform uses REST. In other words, Investigator in 10.4 and above uses native ports, and the REST ports are only used for REST.

If you are going to operationalize the use of REST in an organization, the first thing that you should do is change it to HTTPS. And, obviously, change the default password regardless.

Then you can authenticate all of your scripts over HTTPS.

Let’s see how to change REST to https.

We’ll log into the NetWitness UI.

When a company first sets up the product, the first thing it should do is change the default admin password. You can set up a service account: change the admin password to something obscure, create a new service account that is used specifically for REST, and then put that user on all the REST services.  A naming convention might be your company name, underscore, REST, for example, so that whenever queries are happening on the platform, you will know that it is that user versus a nefarious automated tool.

Let’s see how this would work. We’ll change the Concentrator to HTTPS, then add a user, and then update our script to use that user.

Let’s go into Services.

0.39.02.04

Let’s change the Concentrator service to HTTPS.  Since REST is controlled at the service level, you would need to perform this on the Brokers, Concentrators, Log Collectors, and Decoders as well. The process is the same, so, for tutorial purposes, let’s change just the Concentrator.

First we’ll go to that service, and then click View > Explore.

Then we’ll go to the Node entitled REST and then to the subnode called config.

Change “ssl” from off to on.

This requires a service restart at this point.

We’ll open a command window and log on as root.

Next we’ll stop the Concentrator service and restart it.  

Now let’s go over to a web browser and make an HTTP request to interact with it. We get “no data received,” so the change is working.

When we do an HTTP*S* request, there is no certificate, and for this tutorial that is fine. But now we get challenged for a username and password, and once we can see the nodes, we are back in using HTTPS. And we can no longer use HTTP to interact with that same device.

0.40.50.15

Create A New User Account (same vid on security)

Next, to create a new user account, we can go to Explore > Security on the same Concentrator and click the Plus icon. We’ll give it the name “service underscore rest”.

Then we’ll complete the various fields. Next, we’ll select an Authentication Type of NetWitness.  For this tutorial, under Role Membership, we’ll assign it to the Administrators group. But you could take the additional step of creating a different new role that only has access to certain things. For this tutorial, however, we will not get too granular.

We’ll click Apply.

The REST service account is now enabled over HTTPS. We have a REST user with an account.

41.49.10 >>

When we attempt HTTP we get no data as expected.

Now, let’s use a different browser to test HTTPS because our information might be cached in the old one.

 

Let’s type in the URL to that Concentrator but over HTTPS this time.

We get the same certificate warning. Then we enter in our credentials with “service_rest” as our user and the password and we are back in using that new account.

Now, as an Administrator reviewing the logs of this platform, we can effectively start decommissioning any further use of the admin account.

As we consistently decommission the use of admin, if we spot its use in the logs we can identify whether it is legitimate or nefarious.  We can then compartmentalize all the users and roles to the appropriate levels and responsibilities. Once we have isolated the account for REST, we know that we should never get a REST call from a particular service unless we control it.

Now we can avoid the issues of default passwords never being changed, and of cleartext traveling over HTTP and potentially being sniffed.

 

Using REST – the Analyst Perspective

Now that we have talked about using REST programmatically from an administrative perspective, let’s see how you can also use REST to pull log or packet data for use by Analysts, or by third-party tools that Analysts use.

For this tutorial, let’s look at a Log Decoder and see how we can pull log data using REST.

Here, we will look at a script called “nwgetlogs.”  It takes a two-phased approach.  Let’s look at the data set for a moment, and then the script’s approach will make more sense.

We’ll go to Investigation > Navigate. We’ll load the values and look at the last 24 hours’ worth of data.

Under the meta key entitled Message ID, notice that one of the values is “get”

When we click on “get”, we can then see that there are 37,118 log messages.  We can poke around and look at “get” messages from our proxy logs.  We can come in here, do a View Log, and see its meta.

But, what if we wanted to run this query externally?

 

HIGHLIGHT msg.id=’get’

Perhaps this is a feed mechanism for something else, where, for instance, a third-party tool needs the results from this query.

In other words, your business requirement is to have a script or an automated interface to Investigator: we want to be able to get this data without human interaction.  This message ID “get” query is a very important one.

Let’s examine the “nwgetlogs.py” script in a command window.  The parameters have defaults set within the script so that we don’t have to enter them on the command line. This script is going to run a default query of “message ID equals get”.
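Under the hood, a query like this maps onto the decoder’s sdk node. A hand-built version of the request might look roughly like the following (the msg=query form and the size parameter are assumptions about the SDK REST interface; the script wraps all of this for us):

http://172.16.198.3:50102/sdk?msg=query&query=select sessionid where msg.id='get'&size=2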

 

It is the same thing that we just did in the UI that gave us all that data.

 

For tutorial purposes only, this script is written for HTTP.

 

When we run this script, we can specify a limit; in this case, the limit is 2.  The script ran through the entire timeframe, did the query, selected some session IDs, and then started combing through those session IDs.

With our limit, we are telling the script: within that timeframe, run the message ID “get” query and give me back two logs.

Let’s say that somebody wanted to take a look at those two logs for that particular query.  You can see the information here; there is a Cisco IronPort message.  When we type “cat 15.raw”, we can see another one as well.

You can see the power of the REST interface:  without interacting with the UI at all, without being in Investigator, we were able to use the appropriate credentials for the right account and pull the meta and the values for logs.

One thing of note is that role-based access controls are still in effect with REST.

So, what happens if a user is pigeonholed into seeing only a certain data set in the UI?  That same user will be limited in exactly the same way in REST. And this applies across the board to all users.

In other words, if REST returns *no* results to you, you might want to identify what access controls are in place for the particular user making the query.

If you did not have access to IronPort data because you are in a group that only can see firewall logs, then running this query would return nothing.

 

How to Find a Metric of Interest with REST

REST gives you administrative control, analytical control, read and write control over most of the platform.

Let’s say that you don’t know where a particular metric is within REST.

52.53.18

In other words, you are using REST, but you don’t know which Node, which message, and which parameter will provide you with the information that you are looking for.

53.28.22

One way is to poke around and click into the Node that you think might be appropriate.  Maybe click on stats, look at the decoder node, look at the database Node, and so on.

But another way to look at it is this.  Let’s say, for example, that you are looking at the stats Node but you don’t really want to scroll through and look at all the messages.

Instead, you can use the “depth” argument. All nodes support it.

If we do a depth of, let’s say, 5, it will drill all the way down the hierarchy. Let’s copy the query URL that was built and paste it into a browser, because it will be a lot easier to see.
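For instance, against the Concentrator from earlier, the pasted URL might look something like this (50105 again being the Concentrator REST port, with the ls message and depth argument described in this section):

http://172.16.198.3:50105/?msg=ls&depth=5&force-content-type=text/plain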

We just ran a depth of 5 and there, is our entire hierarchy.

Under “config”, we have all of these.

Scrolling down a bit, under “stats”, we have all of these.

You can literally go to a high level, run a depth of, say 5, and go through all of these items. This way, you’ll get an understanding of where every single metric is located.

Some of the Nodes are not that deep. “Index stats” is not that deep. “Index config” only consists of these.

So, on a Concentrator, under devices, you have each device, and then each device has stats, so it can get rather deep.

But the beauty of this is that, if you just want to go to all of your devices, you can click the asterisk, do an “ls”, put in a depth of, say, 10, and click Send. There they are, in the Output.

Let’s move it to the URL bar to see it better.

If you had 15 decoders aggregating to one single Concentrator, it would show each of those with a port and then the config Node for each, and then the stats for each and so on.

The value of a data dump like this is that you can see not just where a particular backup node is or what a session rate is, but also what its current value is.

If we keep refreshing this browser, then the values will change.

So, you are not necessarily targeting a particular query parameter value but are instead dumping it all.

Summary

Let’s summarize what we talked about. 

We have talked about REST being an interface into the data set that is exposed to a user, regardless of how complicated the database driving and using that data on the backend may be.  REST gives you metrics on how the system is performing.  From an analytical perspective, and from an administrative perspective, REST provides the data to you.

We talked about different ways to interact with REST. One way is to use the NetWitness UI and go to View > Explore for each appliance service type.

Another way is to use a URL in a browser: by pointing the URL at a certain URI, you can use the GUI of the REST interface to interact with the data.

Yet another way is to use curl, which provides quick, ad hoc access, though without the ability to manipulate the data beyond that point.

We saw how to use curl and “for” loops to quickly do an ad-hoc data dump of a metric that may be important at a given time.

We also saw how to automate that URL using a script, specifically using the requests module within Python. Here we saw how to use a simple HTTP “get” for a particular REST node to retrieve that same data.

The importance of having the script model is that, when the data is returned, you can take some action on it.  For example, when we ran curl on the packet time, we just got the date and had to manually convert it into the number of days since that fixed date, which would yield the amount of retention.  In a script, however, you can take that data, modify it, and rinse and repeat.  No matter how many times you run that script, you will always get the amount of retention without having to do that calculation manually.

 

That was a more administrative use of REST, but we also talked about using REST for analysis purposes.  In this tutorial, we demonstrated the analyst perspective by looking at log meta data and values.

As an example, we did a simple query of message ID equals “get” and pulled out two sessions to show that log data can also be retrieved through the REST interface. We could just as easily have pulled packet meta data and values, and we could pass these to a third-party product if we wanted to do so.

We also looked at how to inspect keys for configuration consistency.  This is a big-ticket item because, as environments grow, entropy creeps in: new devices are brought in that may not have the same parsers or configuration values assigned to them that should have been assigned.

REST can be very useful in comparing a known-good device against a net-new one in order to ensure that, when it is brought onboard, it is healthy and consistent.

 

We also showed how to enable HTTPS for security purposes.  Because REST comes enabled by default over HTTP, it is a best practice to use HTTPS, especially when using REST for automation.  We then showed how to create a service account. That account lives on the service, so it would need to be replicated across all of the services that you intend to interact with (Decoders, Concentrators, Brokers, and so on across the entire environment) so that you are not sending usernames and passwords in cleartext, especially in an automated fashion.

One of the tenets of security is to control roles and responsibilities. If you have a REST user, it certainly doesn’t need admin access; maybe you will deem that it is not to “write” to those nodes and give it “read only” access, for example.

And, when you are reviewing logs, you will need to review the use of admin and make sure that user controls are in place. But when you segment that use out to a REST user, you will know what is, and what is not, expected behavior. Then, if you see the REST user logging into the UI, that could, for example, be a security threat.

That concludes this video.