Whenever I bring up how to check who’s logged into the system, the usual response is ‘just click the Logged in users module!’. Well, I’m here to tell you that it doesn’t work like that, to explain why, and then to give a way of actually finding out (best endeavours anyway; it’s not the most elegant of solutions, I think!).
So first of all, examining the logged in user table within ServiceNow, you can see that its name starts with a v_. Immediately this tells you it’s not a normal table (I think the v stands for virtual). If you were able to look at the actual database, you would see this table doesn’t exist. So what’s it displaying?
The table is dynamically populated with user sessions. ServiceNow stores all of its configuration in the database, but logged in sessions are actually part of the application layer, not the storage layer. As such, the table is reading the logged in sessions from the Java application. Since the application runs on each individual node, and the nodes themselves do not communicate with one another, all the logged in users table shows you is the users logged in to the application node that you’re on. Logging in to another node is a completely random hit and miss exercise of clearing your cookies and reconnecting to ServiceNow, hoping the load balancer sends you in a different direction to the node you want to be on. Not great, then, for finding out who’s on your system.
What I found was that you can actually run a GlideRecord query on the logged in user table, and it will return the results for the node that you’re logged in to. So the next step was working out how to make the GlideRecord query run on each and every node to get the full picture.
The only way I found I could do this was by creating a scheduled job (a record directly on the sys_trigger table, not a Scheduled Script Execution).
The script in the scheduled job is a very simple GlideRecord:
var gr = new GlideRecord('v_user_session');
gr.query();
while (gr.next()) {
    gs.log('Logged in user: ' + gr.user);
}
This doesn’t do much other than log the logged in users; you can do something fancier, whatever it is you need to do with the data.
The trick to getting it to run on all nodes is actually even simpler! On the sys_trigger table there is a field called system_id. All you need to do is set this to All nodes. ServiceNow then takes care of creating a duplicate of the job on every node, forcing the above GlideRecord query to run everywhere and return your results.
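If you’d rather set this from a script than the form, here’s a hedged sketch: the job name is hypothetical, and ‘ALL NODES’ is the value that worked for me, so verify both on your own instance.

var job = new GlideRecord('sys_trigger');
job.addQuery('name', 'Log logged in users'); // hypothetical job name
job.query();
if (job.next()) {
    job.system_id = 'ALL NODES'; // ServiceNow clones the job onto every node
    job.update();
}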
Of course, there’s no guaranteeing exactly when the jobs will run, but usually they complete within a minute or two of the trigger time. One thing you could look at doing is having this run every 15 minutes and writing the values back to a custom logged in users table, if you want that level of detail.
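As a sketch of what that write-back could look like (u_logged_in_user and its columns are a hypothetical custom table you would create yourself):

var session = new GlideRecord('v_user_session');
session.query();
while (session.next()) {
    var rec = new GlideRecord('u_logged_in_user'); // hypothetical custom table
    rec.initialize();
    rec.u_user = session.user;
    rec.u_captured_at = new GlideDateTime(); // when this node took the snapshot
    rec.insert();
}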
I actually needed this for the solution I spoke about in the previous post on adding/removing roles. When an admin user’s allotted role time had passed, I needed to log them off the system. To do this, I had to find which nodes the user was logged in to (it could technically be multiple), so the job would run this script and kill the admin’s sessions on all nodes.
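For what it’s worth, the session-killing part looked roughly like this. A sketch only: the user name is hypothetical, locking the session record is the approach that worked for me, and I’d strongly suggest testing anything like this on a sub-production instance first.

var session = new GlideRecord('v_user_session');
session.addQuery('user', 'admin.user'); // hypothetical user to log off
session.query();
while (session.next()) {
    session.locked = true; // locking the session kills it on this node
    session.update();
}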
A side note: at first I started digging, thinking there must be a way for the nodes to communicate with each other. The reasoning behind this was simple: when going to the system diagnostics homepage, I could see all the nodes, and each node had an attribute saying logged in users. I therefore concluded that the node I was currently connected to could reach out and grab this data from the other nodes.
Turns out I was wrong, which is what led me to the above solution. What ServiceNow does is, for the current node that you’re logged in to, query its active sessions directly; but for the other nodes, it reads from the sys_cluster_state table. This table stores the statuses of all the nodes on your system. One of the fields on this table holds an XML document with a number of diagnostics and statistics. One of the values in that XML is the logged in users, and this is where the system diagnostics page gets its information about the other nodes.
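You can peek at this yourself with a quick background script. A sketch: I’m assuming the XML lives in a field called stats, so check the dictionary on your instance before relying on it.

var node = new GlideRecord('sys_cluster_state');
node.query();
while (node.next()) {
    // Dump each node's raw stats XML; search it for the logged in users value
    gs.log(node.system_id + ': ' + node.stats);
}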