# Recommender Systems: The Latent Factor Model and Matrix Factorization

Recommender systems try to find items that a user might be interested in, based on what we know about the user’s “preferences” (whatever that means). The basic idea is that users reveal their preferences, e.g. by rating items they have viewed.

One approach to finding appropriate recommendations relies on the idea that people with similar preferences buy or watch the same items. This is the neighborhood method: we simply look for users who have expressed similar interests and recommend items that these users also found interesting.

Another approach, namely the latent factor model, is useful if we have a way of measuring how much a user likes a given item. The most famous example is the “Netflix” homepage, where users can explicitly rate movies they have seen. The following introduction of this approach is based on the article Matrix Factorization Techniques for Recommender Systems by Koren, Bell and Volinsky.

# Matrix Factorization

The preferences of each user $u$ can be expressed as a “latent factor vector” $p_u$ in a “feature space” $R^f$. Each item $i$, on the other hand, bears certain “features” $q_i$ in the same feature space $R^f$, so that the rating that we can expect the user $u$ to give to the item $i$ is approximately $\hat r_{ui}=p_u^T q_i$. For a given set of ratings, this approach yields sensible predictions for the ratings users will give to items they do not know yet, provided the dimension of the feature space is chosen sensibly.

So far, we have the feature space $R^f$, in which the rating estimates $\hat r_{ui}$ are expressed as the scalar product of the “latent factor vectors” of the users, $p_u$, and the items, $q_i$. The user vectors characterize the user interests, while the item vectors characterize the item features. The word “latent” indicates that the factors the users are interested in and the items bear have no explicit meaning a priori; it is an axiom of this approach that they can be found in the space $R^f$ such that they lead to useful predictions.

For 4 users and a feature space dimension $f$ of 2, the user matrix $P\in R^{f\times n_u}$ has the form $P = (p_1, p_2, p_3, p_4)$, with $p_u \in R^2$ for $u=1,\dots,4$.
The item matrix $Q \in R^{f\times n_i}$, for 6 items, is $Q = (q_1, q_2, q_3, q_4, q_5, q_6)$, with $q_i\in R^2$ for $i=1,\dots,6$.
The estimated rating matrix $\hat R=(\hat r_{ui}) \in R^{n_u\times n_i}$ can consequently be written as $\hat R=P^TQ$.
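To make the dimensions concrete, here is a minimal numpy sketch of this small example (the factor entries are made up for illustration, not learned from data):

```python
import numpy as np

f, n_u, n_i = 2, 4, 6
rng = np.random.default_rng(0)

# User matrix P: one latent factor vector p_u per column (entries made up).
P = rng.normal(size=(f, n_u))
# Item matrix Q: one latent factor vector q_i per column (entries made up).
Q = rng.normal(size=(f, n_i))

# Estimated rating matrix: \hat r_{ui} = p_u^T q_i for every user-item pair.
R_hat = P.T @ Q                              # shape (n_u, n_i) = (4, 6)
assert np.isclose(R_hat[0, 2], P[:, 0] @ Q[:, 2])
```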

The question is: How do we obtain the user and item vectors in the feature space?

Given the set $\kappa$ of user-item pairs $(u,i)$ for which users have given ratings $r_{ui}$, we want to find user and item vectors that explain these ratings (careful: while $\hat r_{ui}$ denotes our estimate of the user-item rating, the notation $r_{ui}$ indicates that we have obtained the rating from the user explicitly).

A straightforward approach is to solve the following optimization problem:
$\min\limits_{P,Q} \sum\limits_{(u,i)\in \kappa} (r_{ui}- p_u^T q_i)^2$
The problem is that this optimization problem is ill-posed: it tries to find matrices $P$ and $Q$ explaining the data we have gotten, but it leaves the vectors of users who have not rated any item undefined (arbitrary), and does the same for unrated items. Also, one should take into account that the rating a user gives to an item certainly depends on the user’s mood and therefore contains noise. That means: we are less interested in finding the exact solution to this problem than in finding a “good” solution. The way around this, which also helps in finding a solution, is to regularize the problem to:
$\min\limits_{P,Q} \sum\limits_{(u,i)\in \kappa} (r_{ui}- p_u^T q_i)^2 + \lambda \cdot ( \sum\limits_{u=1}^{n_u} \Vert p_u \Vert^2 + \sum\limits_{i=1}^{n_i} \Vert q_i \Vert^2)$
Here, $\lambda>0$ is the regularization constant. The larger $\lambda$ is chosen, the more “important” it is for the solution to have small vectors $p_u$ and $q_i$. Smaller values of $\lambda$ put more weight on matching the given ratings. Refinements of this approach that take various aspects of different users and items into account can be found in the article.
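One common way to minimize this objective, also discussed in the article, is stochastic gradient descent: loop over the known ratings and move $p_u$ and $q_i$ against the gradient of the regularized error. The sketch below is a minimal illustration of that idea; the learning rate, $\lambda$, and the toy ratings are arbitrary choices, not values from the article:

```python
import numpy as np

def factorize(ratings, n_u, n_i, f=2, lam=0.1, lr=0.01, epochs=200, seed=0):
    """Learn P (f x n_u) and Q (f x n_i) from (u, i, r_ui) triples by
    stochastic gradient descent on the regularized squared error."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(f, n_u))
    Q = rng.normal(scale=0.1, size=(f, n_i))
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - P[:, u] @ Q[:, i]        # error r_ui - p_u^T q_i
            p_u = P[:, u].copy()             # snapshot before updating
            # Step against the gradient; the lam terms shrink the vectors.
            P[:, u] += lr * (e * Q[:, i] - lam * P[:, u])
            Q[:, i] += lr * (e * p_u - lam * Q[:, i])
    return P, Q

# Toy data: the set kappa as (user, item, rating) triples (made up).
ratings = [(0, 0, 5.0), (0, 2, 3.0), (1, 0, 4.0), (2, 3, 1.0), (3, 5, 2.0)]
P, Q = factorize(ratings, n_u=4, n_i=6)
print(np.round(P.T @ Q, 2))                  # estimates for all pairs
```

Note that each update only touches the vectors of the one user and one item involved in the current rating, which is what makes this method cheap even for very sparse rating data.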

The attentive reader may have noticed that besides the choice of $\lambda$, there is another constant that we have to choose carefully, namely the dimension of the feature space, $f$. In fact, in a sufficiently large feature space, we can find user and item vectors that perfectly explain the known ratings $r_{ui}$ while setting all other rating estimates to 0, so that we do not gain any information:
Let $R$ be the matrix containing the actual ratings $r_{ui}$ for $(u,i)\in \kappa$, and 0 for $(u,i)\notin \kappa$. If $f$ is chosen as $f:=n_i$, then $P=R^T$ and $Q=I$, with $I$ being the unit matrix in $R^{n_i\times n_i}$, do the job: it holds that $R=P^TQ$.
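This degenerate solution is easy to check numerically; a quick sketch (the rating matrix $R$ here is made up):

```python
import numpy as np

n_u, n_i = 4, 6
# A made-up rating matrix R (zeros would stand in for unknown ratings).
R = np.random.default_rng(0).integers(1, 6, size=(n_u, n_i)).astype(float)

P = R.T           # shape (n_i, n_u), i.e. f x n_u with f = n_i
Q = np.eye(n_i)   # unit matrix in R^{n_i x n_i}
assert np.allclose(P.T @ Q, R)   # exact fit, yet the factors carry no information
```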
Summarizing, the dimension of the feature space has to be chosen large enough for the latent factor vectors to carry information, but also small enough to prevent overfitting (notably smaller than $n_i$). The SVD approach that we will introduce in a subsequent post may help with this problem.