
PAM Responder

The PAM Responder always starts by determining the user’s group memberships. It does this by internally calling initgroups on each domain stanza until it finds a match. Once a match is found, the PAM Responder knows which domain to use, which identity to use, and the groups to which that identity belongs. In our use case there is only a single domain, so if calling initgroups against our domain fails, the whole client request fails. Note that the presence of subdomains makes this more complicated, but that has been discussed earlier in the document.
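For readers unfamiliar with what this “initgroups” data looks like, here is a minimal sketch using the standard getgrouplist(3) libc call. This is only the client-side view of the same information (the user name is just an example), not SSSD’s internal code path:

#include <stdio.h>
#include <grp.h>
#include <pwd.h>

int main(void)
{
    struct passwd *pw = getpwnam("testuser1");   /* example user name */
    if (pw == NULL) {
        return 1;
    }

    gid_t groups[64];
    int ngroups = 64;

    /* Fill 'groups' with all GIDs the user is a member of,
     * including the primary GID. */
    if (getgrouplist(pw->pw_name, pw->pw_gid, groups, &ngroups) == -1) {
        fprintf(stderr, "user is in more than %d groups\n", ngroups);
        return 1;
    }

    for (int i = 0; i < ngroups; i++) {
        printf("%u\n", (unsigned) groups[i]);
    }
    return 0;
}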

The PAM Responder’s context (pam_ctx) is created at startup by pam_process_init(), which takes several actions, including:

  • calling sss_process_init with Responder-specific arguments, including supported commands
  • initializing Responder-specific optimizations (see Optimizations section)
  • retrieving Responder-specific config information from the confdb
Data Flow (PAM Responder)

This diagram shows the data flow generated by an SSS Client Application making a PAM request to SSSD:

  1. The SSS Client Application’s request is handled by our dynamically loaded PAM Client Library (see the example PAM configuration after this list), which sends the request to the matching PAM Responder.
  2. Like the NSS Responder, the PAM Responder sends a getAccountInfo request message to the Backend, but only to ask it to update the Cache with the client’s group memberships (i.e. initgroups).
  3. The Backend uses the AD Provider Plugin to make an LDAP call to the remote AD Server and to retrieve the response.
  4. The Backend updates the Cache, and also sends a getAccountInfo response message (containing status) to the PAM Responder; this also serves as an indication that the Cache has been updated.
  5. The PAM Responder reads the updated initgroups information from the Cache.
  6. The PAM Responder sends a pamHandler request message to the Backend.
  7. The Backend uses the AD Provider Plugin to retrieve the response from the Child Process, which makes the actual Kerberos (KRB) calls; note that the Child Process (not shown) will be discussed later in the document.
  8. The Backend sends a pamHandler response message (containing status) to the PAM Responder.
  9. The PAM Responder returns the updated result to the PAM Client Library, which passes it to the SSS Client Application.
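The PAM Client Library mentioned in steps 1 and 9 is SSSD’s pam_sss.so module. It is wired into the PAM stack by the distribution’s tooling; on a Fedora/RHEL system the authconfig-generated lines look roughly like this (an illustrative sketch only, the exact lines vary by distribution and service):

auth        sufficient                                   pam_sss.so use_first_pass
account     [default=bad success=ok user_unknown=ignore] pam_sss.so
password    sufficient                                   pam_sss.so use_authtok
session     optional                                     pam_sss.so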

Differences between PAM and NSS:

1. PAM Responder’s data flow is different from the NSS Responder’s data flow. The primary difference is that the result of a pamHandler request is not stored in the Cache. The pamHandler response message contains status information, most of which is passed back to the PAM Client Library.

2. NSS Responder sends the Backend only a single request message, corresponding to the SSS Client’s request. In contrast, the PAM Responder sends two request messages: the first one to find the client’s group memberships, and the second one corresponding to the SSS Client’s request.

3. The PAM Responder always downloads the group memberships from the server (if it is reachable), even if the cache is up to date. This ensures correct authorization data at login, because group memberships are established at login on a Linux system.

Let us talk about the more intricate details of PAM in the next post!

References: https://fedorahosted.org/sssd/wiki/InternalsDocs#a7.5.PAMResponder

 

Posted on March 7, 2014

 

Continuation: An overall view of NSS

Last time we went through code snippets to get a code-level understanding of NSS. This time, let’s sum up the NSS Responder as a whole.

NSS: The Name Service Switch (NSS) is a facility in Unix-like operating systems that provides a variety of sources for common configuration databases and name resolution mechanisms. These sources include local operating system files (such as /etc/passwd, /etc/group, and /etc/hosts), the Domain Name System (DNS), the Network Information Service (NIS), and LDAP.
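When SSSD is in use, its NSS plugin appears as the sss source in /etc/nsswitch.conf next to the local files; a typical configuration looks roughly like this (the exact lines vary by distribution):

passwd:     files sss
group:      files sss
shadow:     files sss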

NSS Data flow:

This diagram shows the data flow generated by an SSS Client Application making an NSS request to SSSD.

nss_ctx

The NSS Responder’s context (nss_ctx) is created at startup by nss_process_init(), which takes several actions, including:

  • calling sss_process_init() with Responder-specific arguments, including supported commands and supported SBus methods
  • initializing idmap_ctx
  • initializing Responder-specific optimizations (see NSS Optimizations section)
  • retrieving Responder-specific config information from the confdb

Client-Facing Interactions:

The commands supported by the NSS Responder are defined in nsssrv_cmd.c. These commands (and their inputs) are extracted from the packet sent to the Responder by the SSS Client. After processing the command, the NSS Responder returns a packet to the SSS Client containing command output and/or an error message.

Backend-Facing Interactions:

The NSS Responder communicates with the Backend using a single SBus method named getAccountInfo. For getAccountInfo, the outgoing SBus request message is constructed by sss_dp_get_account_msg and “sent” by sbus_conn_send. The incoming SBus reply message is “received” by sss_dp_get_reply.

Complete Data Flow:

The NSS Responder reads a packet from the client socket, processes it, and writes an SBus message to the backend socket. Later, the NSS Responder reads the SBus reply from the backend socket, processes the reply, and writes a reply packet to the client socket. Putting the complete working together, it goes as follows:

1. The SSS Client Application’s request is handled by our dynamically loaded NSS Client Library, which consults the fast cache (a small code example of such a request follows these steps). If a valid cache entry exists, the NSS Client Library immediately returns the cached result to the SSS Client Application.

2. If no valid cache entry exists in the fast cache, the NSS Client Library sends the client’s NSS request to the matching NSS Responder.

3. The NSS Responder consults the Cache. If a valid (unexpired) cache entry exists, the NSS Responder immediately returns the cached result to the SSS Client Application (this step is not shown above).

4. If no valid cache entry exists, the NSS Responder sends a getAccountInfo request message to the Backend, asking the Backend to update the Cache with data corresponding to the client’s NSS request.

5. The Backend uses the AD Provider Plugin to make an LDAP call to the remote AD Server and to retrieve the response from the AD Server.

6. The Backend updates the Cache, and also sends a getAccountInfo response message (containing status) to the NSS Responder; this also serves as an indication that the Cache has been updated.

7. The NSS Responder reads the updated result from the Cache.

8. The NSS Responder returns the updated result to the NSS Client Library, which passes it to the SSS Client Application.
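From the application’s point of view, all of this machinery is invisible: the SSS Client Application in step 1 is simply any program making an ordinary libc name-service call. A minimal sketch (the user name is just an example):

#include <stdio.h>
#include <pwd.h>

int main(void)
{
    /* This one call drives the whole flow above:
     * libc -> NSS Client Library (libnss_sss) -> NSS Responder -> cache or Backend */
    struct passwd *pw = getpwnam("testuser1");
    if (pw == NULL) {
        printf("user not found\n");
        return 1;
    }
    printf("uid=%u gid=%u shell=%s\n",
           (unsigned) pw->pw_uid, (unsigned) pw->pw_gid, pw->pw_shell);
    return 0;
}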

Next we will talk about the PAM Responder, a bit more complex than NSS but interesting :)

Reference: http://en.wikipedia.org/wiki/Name_Service_Switch

https://fedorahosted.org/sssd/wiki/InternalsDocs

 

 

Posted on February 16, 2014

 

NSS Responder

There are mainly two responders in SSSD: NSS and PAM.

The role of a Responder is to:

  1. receive request messages from a matching SSS Client,
  2. fulfill the requests in one of the following two ways:
     • either directly retrieving a valid cached result from the sysdb Cache, or
     • asking the Backend to update the sysdb Cache and then retrieving an up-to-date result from it,
  3. send back response messages to the matching SSS Client.

The NSS Responder works between two major parties: the NSS client and the Data Provider. The NSS client requests data (a user by name or by id, etc.) and receives the result from the NSS Responder.

A very simple way to understand the flow is the following, as explained by my mentor (Jakub Hrozek) :)

“The Data Provider can be thought of as ‘the server’. It is the component that is called when there is no data available to the NSS responder. Maybe it would be easier to grasp how the NSS responder works with a mini-algorithm:

0. A request comes in to gather data about an entity. This is simulated in the test by calling will_return(__wrap_sss_packet_get_cmd); in the real world the function sss_packet_get_cmd is called.

1. The NSS responder checks if the data is available in the cache by calling the sysdb functions.

1a. If the data is available in the cache, it is returned. The request ends, go to 2.

1b. If the data is not available in the cache, the Data Provider is asked for the data. Execution waits for the Data Provider to finish and then returns to 1.

1c. If the data is not available in the cache and the Data Provider was checked already, set a negative result and go to 2.

2. The result (positive or negative) is returned to the client.”
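Written as code, the mini-algorithm above looks roughly like the following self-contained sketch. All the names here (cache_lookup, data_provider_refresh, …) are invented for illustration only; the real responder code is asynchronous and lives in nsssrv_cmd.c and friends.

#include <stdio.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-ins for the sysdb cache and the Data Provider;
 * none of these names exist in SSSD, they only illustrate the algorithm. */
static bool cache_has_entry = false;

static bool cache_lookup(const char *name)
{
    (void) name;
    return cache_has_entry;               /* 1. is the data in the cache? */
}

static void data_provider_refresh(const char *name)
{
    /* In SSSD this is an asynchronous getAccountInfo request; here we just
     * pretend the Backend found the entry and updated the cache. */
    cache_has_entry = (strcmp(name, "testuser1") == 0);
}

static void handle_request(const char *name)
{
    bool dp_checked = false;

    for (;;) {
        if (cache_lookup(name)) {          /* 1a. cache hit, return it */
            printf("%s: positive result from cache\n", name);
            return;
        }
        if (dp_checked) {                  /* 1c. already asked the DP */
            printf("%s: negative result\n", name);
            return;
        }
        data_provider_refresh(name);       /* 1b. ask the Data Provider */
        dp_checked = true;                 /* ...then go back to 1. */
    }
}

int main(void)
{
    handle_request("testuser1");           /* ends up positive */
    handle_request("nosuchuser");          /* ends up negative */
    return 0;
}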
Now, let us take a test function in order to understand how this works:
/* Testsuite for getuid */
static void mock_input_id(uint8_t *id)
{
    will_return(__wrap_sss_packet_get_body, WRAP_CALL_WRAPPER);
    will_return(__wrap_sss_packet_get_body, id);
}

static int test_nss_getpwuid_check(uint32_t status, uint8_t *body, size_t blen)
{
    struct passwd pwd;
    errno_t ret;

    assert_int_equal(status, EOK);

    ret = parse_user_packet(body, blen, &pwd);
    assert_int_equal(ret, EOK);

    assert_int_equal(pwd.pw_uid, 101);
    assert_int_equal(pwd.pw_gid, 401);
    assert_string_equal(pwd.pw_name, "testuser1");
    assert_string_equal(pwd.pw_shell, "/bin/sh");
    assert_string_equal(pwd.pw_passwd, "*");
    return EOK;
}

void test_nss_getpwuid(void **state)
{
    errno_t ret;

    /* Prime the cache with a valid user */
    ret = sysdb_add_user(nss_test_ctx->tctx->dom,
                         "testuser1", 101, 401, "test user1",
                         "/home/testuser1", "/bin/sh", NULL,
                         NULL, 300, 0);
    assert_int_equal(ret, EOK);

    uint8_t id = 101;
    mock_input_id(&id);
    will_return(__wrap_sss_packet_get_cmd, SSS_NSS_GETPWUID);
    mock_fill_user();

    /* Query for that user, call a callback when command finishes */
    set_cmd_cb(test_nss_getpwuid_check);
    ret = sss_cmd_execute(nss_test_ctx->cctx, SSS_NSS_GETPWUID,
                          nss_test_ctx->nss_cmds);
    assert_int_equal(ret, EOK);

    /* Wait until the test finishes with EOK */
    ret = test_ev_loop(nss_test_ctx->tctx);
    assert_int_equal(ret, EOK);
}
Let us look into the above function test_nss_getpwuid():

1)

/* Prime the cache with a valid user */
ret = sysdb_add_user(nss_test_ctx->tctx->dom,
                     "testuser1", 101, 401, "test user1",
                     "/home/testuser1", "/bin/sh", NULL,
                     NULL, 300, 0);

We are adding a valid user to the system database (sysdb).
2)

mock_input_id(&id);
will_return(__wrap_sss_packet_get_cmd, SSS_NSS_GETPWUID);
mock_fill_user();

Here we are creating a dummy packet with the statements above. Our test double is named __wrap_sss_packet_get_cmd() and it replaces the original sss_packet_get_cmd() function. We use the __wrap prefix because the GNU linker’s --wrap option redirects every call to sss_packet_get_cmd() to __wrap_sss_packet_get_cmd().

will_return(function, value): this enqueues “value” onto the queue of mock values for that function.

mock(): likewise, each mock() call dequeues one value from the mock value queue. Inside the wrapped function we call mock() to obtain whatever value the test enqueued with will_return(). mock_input_id() above instructs __wrap_sss_packet_get_body(), mentioned earlier, to return a uint32_t (the uid) in the packet body buffer.
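To make the will_return()/mock() pairing concrete, here is a stripped-down, self-contained cmocka example independent of SSSD. The function names are invented for the illustration, and the test calls the wrapper directly so that no special linker flag is needed to build it:

#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <setjmp.h>
#include <cmocka.h>

/* Imagine get_answer() normally lives elsewhere; with -Wl,--wrap=get_answer
 * every call to it would land in __wrap_get_answer() instead. */
int __wrap_get_answer(void)
{
    /* Dequeue whatever value the test enqueued with will_return(). */
    return mock_type(int);
}

static void test_answer(void **state)
{
    (void) state;
    will_return(__wrap_get_answer, 42);          /* enqueue the mock value */
    assert_int_equal(__wrap_get_answer(), 42);   /* mock_type() dequeues it */
}

int main(void)
{
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_answer),
    };
    return cmocka_run_group_tests(tests, NULL, NULL);
}

In the real SSSD tests the production code keeps calling sss_packet_get_cmd() as usual, and the --wrap linker option reroutes that call into the test’s wrapper.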
3)

/* Query for that user, call a callback when command finishes */
set_cmd_cb(test_nss_getpwuid_check);
ret = sss_cmd_execute(nss_test_ctx->cctx, SSS_NSS_GETPWUID,
                      nss_test_ctx->nss_cmds);

With sss_cmd_execute() we tell the program to execute GETPWUID and, when that is ready, to call test_nss_getpwuid_check(), written above. The function nss_cmd_getpwuid() will then be executed and it will read the data we prepared with mock_input_id(). When the whole processing finishes, the callback test_nss_getpwuid_check() gets executed.
Let us discuss more of its functionality in upcoming posts :)
 

Posted on January 28, 2014

 

Somewhat about Tevent Context!

A tevent context is a handle that describes an instance of the ‘tevent’ event library. To work with it, we first need to allocate some memory, say “memctx”. Now that we have space allocated, we put our tevent_context pointer there. Events to be caught and handled must first be registered with a particular context. The reason for subordinating events to a tevent context structure is that several contexts can be created and each of them can be processed at a different time. Thus we can maintain different contexts for different events. For example, we can have one context containing just file descriptor events, a second one taking care of signal and time events, and a third one which keeps information about the rest.

// A little example:

TALLOC_CTX *memctx = talloc_new(NULL);
assert_non_null(memctx);

struct tevent_context *ev_ctx = tevent_context_init(memctx);
assert_non_null(ev_ctx);
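To show how an event is tied to a particular context, here is a small self-contained sketch (assuming libtalloc and libtevent are installed) that registers a one-second timer in the context and runs the event loop once:

#include <stdio.h>
#include <sys/time.h>
#include <talloc.h>
#include <tevent.h>

static void timer_handler(struct tevent_context *ev,
                          struct tevent_timer *te,
                          struct timeval current_time,
                          void *private_data)
{
    (void) ev; (void) te; (void) current_time; (void) private_data;
    printf("timer fired\n");
}

int main(void)
{
    TALLOC_CTX *memctx = talloc_new(NULL);
    struct tevent_context *ev_ctx = tevent_context_init(memctx);

    /* The timer event belongs to ev_ctx; a different context would never see it. */
    struct tevent_timer *te = tevent_add_timer(ev_ctx, memctx,
                                               tevent_timeval_current_ofs(1, 0),
                                               timer_handler, NULL);
    if (te == NULL) {
        talloc_free(memctx);
        return 1;
    }

    tevent_loop_once(ev_ctx);   /* waits for and dispatches the timer */

    talloc_free(memctx);        /* frees ev_ctx and the timer as well */
    return 0;
}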

The diagram of the tevent_context structure in the tevent tutorial (linked below) explains the idea clearly.

Source : https://tevent.samba.org/tevent_tutorial.html

 

Posted on January 26, 2014

 

Negative Cache

My last piece of work was on the negative cache module. While writing unit tests for the module I gathered some knowledge about negative caching. Let us have a brief look into it!

Simple Definition:

DNS caching of unsuccessful name resolution attempts is called negative caching.

Elaboration:

Let us understand it more elaborately. A resolver receives positive or negative responses to its queries and adds each response to its cache accordingly. The resolver always checks the cache before querying any DNS servers; if a name is in the cache, the resolver uses the cached answer rather than querying a server. Since there is no resource record for an invalid name, the server itself must decide how long this negative information should be cached. Thus, if a negative response is cached for a query, the resolver does not try to resolve the name again later but returns the same negative answer when the query is asked again.
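The mechanism is easy to express in code. Below is a tiny, generic sketch of a negative cache with a TTL; this is not SSSD’s or any resolver’s real implementation, just an illustration of the idea:

#include <stdio.h>
#include <string.h>
#include <stdbool.h>
#include <time.h>

#define NEG_TTL     15      /* seconds to remember a failed lookup */
#define MAX_ENTRIES 64

struct neg_entry {
    char name[64];
    time_t expires;
};

static struct neg_entry neg_cache[MAX_ENTRIES];
static int neg_count;

/* Remember that 'name' could not be resolved. */
static void neg_cache_add(const char *name)
{
    if (neg_count < MAX_ENTRIES) {
        snprintf(neg_cache[neg_count].name, sizeof(neg_cache[neg_count].name),
                 "%s", name);
        neg_cache[neg_count].expires = time(NULL) + NEG_TTL;
        neg_count++;
    }
}

/* Return true if 'name' recently failed and the entry has not expired yet;
 * in that case the caller answers "not found" without asking a server. */
static bool neg_cache_check(const char *name)
{
    time_t now = time(NULL);
    for (int i = 0; i < neg_count; i++) {
        if (strcmp(neg_cache[i].name, name) == 0 && neg_cache[i].expires > now) {
            return true;
        }
    }
    return false;
}

int main(void)
{
    neg_cache_add("no-such-host.example");
    printf("negative hit: %s\n",
           neg_cache_check("no-such-host.example") ? "yes" : "no");
    return 0;
}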

Use of Negative Cache:

Negative caching is useful as it reduces the response time for negative answers. It also reduces the number of messages that have to be sent between resolvers and name servers hence overall network traffic. A large proportion of DNS traffic on the Internet could be eliminated if all resolvers implemented negative caching.

ISPs that run multiple DNS servers take advantage of cached negative responses (and other cached response types) to distinguish between their DNS servers. A failed DNS entry remains cached only as long as its TTL allows; after the TTL expires, the entry is dropped from the cache and the DNS server is queried again. While the negative response is still in the cache, it prevents the machine from asking that particular DNS server again and makes it ask another server instead, which avoids a timeout error if that server fails to respond. The negative response, too, is eventually dropped, and by then the DNS server that issued it may have updated itself.

 

 

Posted on January 10, 2014

 

It is always good to have backup!

I am sure many of you are familiar with the Git version control system and GitHub. Still, I wish to share some very basic steps to keep a backup of your data by putting it on github.com.

Firstly you need to have an account on github.com.

1) This step is very simple. Just go to github.com and make an account by signing up there.

2) Just beside your username you can find a symbol which helps you create your repository. Click on the icon. Write the name of the repository in the space provided under the title “Repository name”, say SSSD. You can also write a simple description of the repository in the space provided under the title “Description”.

3) Click on the button named “Create repository” once steps 1 and 2 (the description is optional) are done.

Secondly, you need to be familiar with some basic git commands. Let us try putting a file on github.com as an example to understand it better.

1) Go to your terminal. Type : sudo yum install git (for fedora distribution) or type: sudo apt-get install git (for ubuntu users)

2) Now let us configure it.

git config --global user.name "user name"

git config --global user.email "email@whatever.com"

git config --global color.ui auto

3) Let us make a directory where we will have our file.

mkdir dir_name (say,  mkdir testgit)

4) Type command : cd dir_name (say, cd testgit) and get into the directory.

5) Here one more thing is to be done: we need to initialize a git repository in this directory.

Since we are already inside the directory, type the command: git init

This command creates an empty Git repository – basically a .git directory with subdirectories for objects, refs/heads, refs/tags, and template files. An initial HEAD file that references the HEAD of the master branch is also created. More reference can be found here : http://git-scm.com/docs/git-init

6) Let us now make our file.

type: vi filename.txt (say, vi test1.txt)

7) Write some text (say, hello world) in file and save it.

See the status by typing: git status

you will see a message like this:

#   Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#   test1.txt

8) Once you make a file, you need to add it so that git keeps track of the file and its changes.

To add file type: git add filename (say, git add test1.txt).

After adding, you can see the status by typing: git status. You will see a message like this:

# On branch master
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#    new file:   test1.txt

9) Now we are going to commit the file.

type: git commit filename.txt (say, git commit test1.txt)

After this, an editor will open where you can write a commit message and save it. Afterwards you will see something like this on the terminal:

[master a98588d] testing git
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 test1.txt

10) You can see all the commits by using the command: git log

11) Now let us push the file.

type: git push -u where what

“where” is the place you want to push the data to. For this, copy-paste the URL of the git repository you created on github.com.

“what” is what you are pushing. Here in our example it is the master branch.

so the command for this example will be: git push https://github.com/your_git_username/trygit.git master

Note: The name of the repository on GitHub can be different from the name of your local directory.
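Tip: instead of pasting the full URL on every push, you can save it once as a remote named origin (the URL below is just our example) and push through that name.

type: git remote add origin https://github.com/your_git_username/trygit.git

and from then on simply: git push -u origin master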

12) As soon as you are done with step 11, you will be asked for your github.com username and password. Enter both and yippee, we are done :)

Now you can check the pushed file in your repository on github :)

 

Posted on December 27, 2013

 

Getting selected for Outreach Program for Women (December 2013 – March 2014)

I am a bit late in posting this beautiful news :). As it’s said, “better late than never”, so let me share one of the best experiences of my life. I came to know about OPW when I was in the second year of my B.Tech. I applied for it that same year; I could not make it, but I definitely learnt a lot from the experience:

1) Searching about all the participating organizations, https://wiki.gnome.org/OutreachProgramForWomen

2) Going through projects on different platforms and selecting a project.

https://wiki.gnome.org/OutreachProgramForWomen/2013/DecemberMarch#Participating_Organizations

3) Talking about the project on the respective IRC (Internet Relay Chat) channels.

4) Getting a mentor and getting his/her guidance (actually most of the IRC folks help as much as they can; that was the thing about IRC that most struck me).

This year I applied again, taking Fedora as my project organization. The name of my project is “unit test SSSD”; references:

a) https://fedorahosted.org/sssd/

b) https://fedorahosted.org/sssd/wiki/DesignDocs/TestCoverage

I was really lucky to have Jakub Hrozek (https://fedoraproject.org/wiki/User:Jhrozek) as my mentor. He is very good and approachable, and with his guidance I could make my initial contributions to the project very efficiently. Marina Zhurakhinskaya, Lukas Slebodnik, Sumit Bose and the whole Fedora and GNOME teams were really helpful. Getting into OPW means a lot to me. I am sure to learn a lot (which I am!!), and at the same time I wish to be an asset to the organization. Right now I am working on my December task. I will share more about the project in the next blog post… :)

 

Posted on December 22, 2013

 
 
