Iterative Splits of Quadratic Bounds for Scalable Binary Tensor Factorization
Beyza Ermis, Guillaume Bouchard
Abstract:
Binary matrices and tensors are popular data structures that need to be efficiently approximated by low-rank representations. A standard approach is to minimize the logistic loss, well suited for binary data. In many cases, the number m of non-zero elements in the tensor is much smaller than the total number n of possible entries in the tensor. This creates a problem for large tensors because the computation of the logistic loss has a time complexity linear in n. In this work, we show that an alternative approach is to minimize the quadratic loss (root mean square error), which leads to algorithms with a training time complexity that is reduced from O(n) to O(m), as proposed earlier in the restricted case of alternating least-squares algorithms. In addition, we propose and study a greedy algorithm that partitions the tensor into smaller tensors, each approximated by a quadratic upper bound. This technique provides a time-accuracy trade-off between a fast but approximate algorithm and an accurate but slow algorithm. We show that it leads to a considerable speedup in the learning of real-world tensors.
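The paper's quadratic upper bounds are not reproduced on this page; a standard bound of this kind for the logistic loss is the Jaakkola-Jordan variational bound, which majorizes log(1 + exp(-x)) by a quadratic in x that is tight at x = +/-xi. The sketch below (function names are ours, not from the paper) implements that bound and can be checked numerically:

```python
import numpy as np

def logistic_loss(x):
    """Numerically stable logistic loss log(1 + exp(-x))."""
    return np.logaddexp(0.0, -x)

def quad_bound(x, xi):
    """Jaakkola-Jordan quadratic upper bound on the logistic loss.

    For any variational parameter xi, the bound is tight at x = +/-xi:
        log(1 + e^{-x}) <= lam(xi) * (x^2 - xi^2) - (x - xi)/2 + log(1 + e^{-xi})
    with lam(xi) = tanh(xi/2) / (4 * xi), and lam(0) = 1/8 by continuity.
    """
    xi = np.abs(np.asarray(xi, dtype=float))
    xi_safe = np.where(xi < 1e-8, 1.0, xi)  # avoid division by zero below
    lam = np.where(xi < 1e-8, 0.125, np.tanh(xi_safe / 2.0) / (4.0 * xi_safe))
    return lam * (x**2 - xi**2) - (x - xi) / 2.0 + logistic_loss(xi)
```

Replacing the logistic loss by such a quadratic surrogate is what makes the O(m) training time possible: each block update becomes a least-squares problem that only touches the observed entries, at the cost of a looser fit wherever the bound is not tight.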
Pages: 192-199
PDF Link: /papers/14/p192ermis.pdf
BibTex:
@INPROCEEDINGS{Ermis14,
AUTHOR = "Beyza Ermis
and Guillaume Bouchard",
TITLE = "Iterative Splits of Quadratic Bounds for Scalable Binary Tensor Factorization",
BOOKTITLE = "Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI-14)",
PUBLISHER = "AUAI Press",
ADDRESS = "Corvallis, Oregon",
YEAR = "2014",
PAGES = "192--199"
}

