1 Introduction
Mixed packing and covering linear programs (LPs) are a natural class of LPs where coefficients, variables, and constraints are nonnegative. They model a wide range of important problems in combinatorial optimization and operations research. In general, they model any problem which contains a limited set of available resources (packing constraints) and a set of demands to fulfill (covering constraints).
Two special cases of the problem have been widely studied in the literature: pure packing, formulated as $\max\{c^\top x \mid Ax \le b,\ x \ge 0\}$; and pure covering, formulated as $\min\{c^\top x \mid Ax \ge b,\ x \ge 0\}$, where $A$, $b$ and $c$ are all nonnegative. These are known to model fundamental problems such as maximum bipartite graph matching and minimum set cover [LubyN93]. Algorithms to solve packing and covering LPs have also been applied to great effect in designing flow control systems [BartalBR04], scheduling problems [PlotkinST95], zero-sum matrix games [Nesterov05] and mechanism design [ZurelN01]. In this paper, we study the mixed packing and covering (MPC) problem, formulated as checking the feasibility of the set $\{x \ge 0 \mid Px \le \mathbf{1},\ Cx \ge \mathbf{1}\}$, where $P$ and $C$ are nonnegative matrices. We say that $x$ is an $\epsilon$-approximate solution to MPC if it belongs to the relaxed set $\{x \ge 0 \mid Px \le (1+\epsilon)\mathbf{1},\ Cx \ge (1-\epsilon)\mathbf{1}\}$. MPC is a generalization of pure packing and pure covering, hence it is applicable to a wider range of problems, such as multicommodity flow on graphs [Young01, Sherman17], nonnegative linear systems and X-ray tomography [Young01].
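To make the approximation criterion concrete, the following minimal numpy sketch tests membership in the relaxed set above; the function name and the toy instance are illustrative choices of ours, not part of the paper.

```python
import numpy as np

def is_eps_approx_mpc(P, C, x, eps):
    """Test whether x >= 0 lies in the relaxed set
    {x >= 0 : Px <= (1 + eps) 1, Cx >= (1 - eps) 1}."""
    return (np.all(x >= 0)
            and np.all(P @ x <= 1 + eps)
            and np.all(C @ x >= 1 - eps))

# Toy instance: one packing and one covering constraint.
P = np.array([[0.5, 0.5]])   # packing: 0.5*x1 + 0.5*x2 <= 1
C = np.array([[1.0, 0.0]])   # covering: x1 >= 1
x = np.array([1.0, 0.8])
print(is_eps_approx_mpc(P, C, x, eps=0.1))  # True: Px = [0.9], Cx = [1.0]
```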
General LP-solving techniques such as the interior point method can approximate solutions to MPC in a number of iterations that depends only logarithmically on $1/\epsilon$; however, they incur a large per-iteration cost. In contrast, iterative approximation algorithms based on first-order optimization methods require $\mathrm{poly}(1/\epsilon)$ iterations, but the iterations are fast and in most cases conducive to efficient parallelization. This property is of utmost importance in the context of ever-growing datasets and the availability of powerful parallel computers, and it yields much faster algorithms in relatively low-precision regimes.
1.1 Previous work
In the literature, algorithms for the MPC problem can be grouped into two broad categories: width-dependent and width-independent. Here, width is an intrinsic property of a linear program which typically depends on the dimensions and the largest entry of the constraint matrix, and is an indication of the range of values any constraint can take. In the context of this paper and the MPC problem, we define $w_P$ and $w_C$ as the maximum number of nonzeros in any constraint (row) of $P$ and $C$, respectively. We define the width of the LP as $w = \max(w_P, w_C)$.
One of the first approaches used to solve LPs was Lagrangian relaxation: replacing hard constraints with loss functions which enforce the same constraints indirectly. Using this approach, Plotkin, Shmoys and Tardos
[PlotkinST95] and Grigoriadis and Khachiyan [GrigoriadisK96] obtained width-dependent polynomial-time approximation algorithms for MPC. Luby and Nisan [LubyN93] gave the first width-independent parallelizable algorithm for pure packing and pure covering, which ran in $\tilde{O}(1/\epsilon^4)$ parallel time and nearly linear total work. Here, parallel time (sometimes termed depth) refers to the longest chain of dependent operations, and work refers to the total number of operations in the algorithm. Young [Young01] extended this technique to give the first width-independent parallel algorithm for MPC, with $\tilde{O}(1/\epsilon^4)$ parallel time and total work that additionally scales with the column width $d$, the maximum number of constraints that any variable appears in. Young [Young14] later improved his algorithm to run in nearly linear total work. Mahoney et al. [MahoneyRWZ16] later gave an algorithm with a faster parallel runtime of $\tilde{O}(1/\epsilon^3)$.
The other prominent approach in the literature for solving an LP is to convert it into a smooth function [Nesterov05] and then apply general first-order optimization techniques [Nesterov05, Nesterov12]. Although the dependence on $\epsilon$ from using first-order techniques is much improved, it usually comes at the cost of suboptimal dependence on the input size and width. For the MPC problem, Nesterov's accelerated method [Nesterov12], as well as Bienstock and Iyengar's adaptation [BienstockI06] of Nesterov's smoothing [Nesterov05], give rise to algorithms whose runtime depends linearly on $1/\epsilon$, but with far from optimal dependence on input size and width. For pure packing and pure covering problems, however, Allen-Zhu and Orecchia [AllenZhuO19] were the first to incorporate Nesterov-like acceleration while still obtaining near-linear width-independent runtimes, giving an $\tilde{O}(N/\epsilon)$-time algorithm for the packing problem. For the covering problem, they gave an $\tilde{O}(N/\epsilon^{1.5})$-time algorithm, which was then improved to $\tilde{O}(N/\epsilon)$ by [WangRM16]. Importantly, however, these algorithms do not generalize to MPC.
1.2 Our contributions
We give the best parallel width-dependent algorithm for MPC, while incurring only a linear dependence on $1/\epsilon$ in the parallel runtime and total work. Additionally, the total work has near-linear dependence on the input size. Formally, we state our main theorem as follows.
Theorem 1.1.
There exists a parallel $\epsilon$-approximation algorithm for the mixed packing covering problem which runs in $\tilde{O}(w/\epsilon)$ parallel time while performing $\tilde{O}(Nw/\epsilon)$ total work, where $N$ is the total number of nonzeros in the constraint matrices and $w$ is the width of the given LP.
Table 1 compares the running time of our algorithm to previous works solving this problem (or its special cases).
Algorithm | Parallel Runtime | Total Work | Comments
Young [Young01] | $\tilde{O}(1/\epsilon^4)$ | – | work also depends on the column width $d$
Bienstock and Iyengar [BienstockI06] | – | – | width-dependent
Nesterov [Nesterov12] | – | – | width-dependent
Young [Young14] | – | nearly linear in $N$ |
Mahoney et al. [MahoneyRWZ16] | $\tilde{O}(1/\epsilon^3)$ | – |
This paper | $\tilde{O}(w/\epsilon)$ | $\tilde{O}(Nw/\epsilon)$ | width-dependent
Sacrificing width-independence for faster convergence with respect to precision proves to be a valuable tradeoff for several combinatorial optimization problems which naturally have low width. Prominent examples that are not pure packing or covering problems include multicommodity flow and densest subgraph, where the width is bounded by the maximum degree of a vertex. In many real-world graphs the maximum vertex degree is small, hence our algorithm proves to be much faster when high-precision solutions are desired. In Appendix C, we explicitly show that this result directly gives the fastest algorithm for the densest subgraph problem on low-degree graphs.
2 Notation and Definitions
For any integer $q \ge 1$, we represent using $\|x\|_q$ the $\ell_q$ norm of any vector $x$. We represent the infinity-norm as $\|x\|_\infty = \max_i |x_i|$. We denote the infinity-norm ball of radius $r$ (sometimes called the $\ell_\infty$ ball) as the set $B_\infty(r) = \{x : \|x\|_\infty \le r\}$. The nonnegative part of this ball is denoted as $B_\infty^+(r)$. For radius $r = 1$, we drop the radius specification and use the short notation $B_\infty$ and $B_\infty^+$. We denote the extended simplex of dimension $k$ as $\Delta_k = \{y \in \mathbb{R}^k_{\ge 0} : \|y\|_1 \le 1\}$. Further, for any set $K$, we represent its interior, relative interior and closure as $\mathrm{int}(K)$, $\mathrm{relint}(K)$ and $\mathrm{cl}(K)$, respectively. The function $\exp$ is applied to a vector elementwise. Division of two vectors of the same dimension is also performed elementwise. For any matrix $A$, we use $\mathrm{nnz}(A)$ to denote the number of nonzero entries in it. We use $A_i$ and $A^j$ to refer to the $i$th row and $j$th column of $A$, respectively. We use the notation $A_{ij}$, or alternatively $A(i,j)$, to denote the element in the $i$th row and $j$th column of the matrix $A$. $\|A\|$ denotes the operator norm of $A$. For a symmetric matrix $M$ and an antisymmetric matrix $J$, we define the operator $M \succeq J$ to mean that the Hermitian matrix $M + iJ$ is positive semidefinite. We formally define an $\epsilon$-approximate solution to the mixed packing-covering (MPC) problem as follows.
Definition 2.1.
We say that $x \in B_\infty^+$ is an $\epsilon$-approximate solution of the mixed packing-covering problem if $x$ satisfies $Px \le (1+\epsilon)\mathbf{1}_p$ and $Cx \ge (1-\epsilon)\mathbf{1}_c$.
Here, $\mathbf{1}_k$ denotes a vector of $1$'s of dimension $k$, for any integer $k$.
The saddle point problem on two sets $\mathcal{X}$ and $\mathcal{Y}$ can be defined as follows:
(1)  $\displaystyle \min_{x \in \mathcal{X}}\ \max_{y \in \mathcal{Y}}\ y^\top M x,$
where $y^\top M x$ is some bilinear form between $x \in \mathcal{X}$ and $y \in \mathcal{Y}$. For this problem, we define the primal-dual gap function as $\omega(x, y) = \max_{\hat{y} \in \mathcal{Y}} \hat{y}^\top M x - \min_{\hat{x} \in \mathcal{X}} y^\top M \hat{x}$. This gap function can be used as a measure of accuracy of a candidate solution to the above saddle point problem.
Definition 2.2.
We say that $(x, y)$ is an $\epsilon$-optimal solution for (1) if $\omega(x, y) \le \epsilon$.
3 Technical overview
The mixed packing-covering (MPC) problem is formally defined as follows.
Given two nonnegative matrices $P \in \mathbb{R}^{p \times n}_{\ge 0}$ and $C \in \mathbb{R}^{c \times n}_{\ge 0}$, find an $x \in B_\infty^+$ such that $Px \le \mathbf{1}_p$ and $Cx \ge \mathbf{1}_c$ if it exists; otherwise, report infeasibility.
Note that the vector of $1$'s on the right-hand side of the packing and covering constraints can be obtained by simply scaling each constraint appropriately. We also assume that each entry in the matrices $P$ and $C$ is at most one. This assumption, and subsequently the constraint $x \in B_\infty^+$, cause no loss of generality. (This transformation can be achieved by adapting techniques from [WangRM16] while increasing the dimension of the problem by up to a logarithmic factor; details of this fact are in Appendix B in the full paper (supplementary file).) For the purposes of the main text, we work with this assumption.
We reformulate MPC as a saddle point problem, as defined in Section 2:
(2)  $\displaystyle \min_{x \in B_\infty^+}\ \max_{y = (u,v) \in \Delta}\ \Lambda(x, y) := u^\top(Px - \mathbf{1}) + v^\top(\mathbf{1} - Cx),$
where $\Delta$ denotes the extended simplex of dimension $p + c$, with $u$ and $v$ indexing the packing and covering constraints, respectively.
The relation between the two formulations is shown in Section 4.
For the rest of the paper, we focus on the saddle point formulation (2).
The function $\phi(x) := \max_{y \in \Delta} \Lambda(x, y)$ is a piecewise linear convex function.
Assuming oracle access to this "inner" maximization problem,
the "outer" problem of minimizing $\phi$ can be solved using
first-order methods like mirror descent, which are suitable
when the underlying problem space is the unit $\ell_\infty$ ball.
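Since the inner maximization of a linear functional over the extended simplex is attained at a vertex, the oracle for $\phi$ reduces to scanning the constraint violations. A small numpy sketch of this evaluation follows; the clipping at $0$ reflects our assumption that the extended simplex contains the origin.

```python
import numpy as np

def phi(P, C, x):
    """phi(x): maximum over the extended simplex of the weighted constraint
    violations; for a linear objective this is the single worst violation
    (or 0, since y = 0 is allowed in the extended simplex).
    phi(x) <= eps certifies an eps-approximate MPC solution."""
    viol = np.concatenate([P @ x - 1.0, 1.0 - C @ x])
    return max(0.0, float(viol.max()))
```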
One drawback of this class of methods is their rate of convergence,
which is standard for non-accelerated first-order methods on non-differentiable objectives:
obtaining an approximate minimizer $x$
of $\phi$ which satisfies $\phi(x) \le \phi^* + \epsilon$, where $\phi^*$ is the optimal value,
requires $O(1/\epsilon^2)$ iterations.
This means that the algorithm needs to access the inner maximization oracle
$O(1/\epsilon^2)$ times,
which can become prohibitively large in the high-precision regime.
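For concreteness, here is what one non-accelerated step of such a scheme looks like: since $\phi$ is a maximum of linear functions, any row achieving the maximum supplies a subgradient, and a projected step stays in $B_\infty^+$. This is only an illustrative sketch of the baseline method being discussed, with our own naming and a generic step size:

```python
import numpy as np

def subgradient_step(P, C, x, eta):
    """One projected-subgradient step for min_x phi(x) over [0, 1]^n.
    A maximizing row of the stacked violations is a subgradient of phi."""
    viol = np.concatenate([P @ x - 1.0, 1.0 - C @ x])
    i = int(np.argmax(viol))
    g = P[i] if i < P.shape[0] else -C[i - P.shape[0]]
    return np.clip(x - eta * g, 0.0, 1.0)   # project back onto B_inf^+
```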
Note that even though $\phi$ is a piecewise linear, non-differentiable function, it is not a black-box function, but a maximum of linear functions in $x$. This structure can be exploited using Nesterov's smoothing technique [Nesterov05]. In particular, $\phi$ can be approximated by choosing a strongly convex regularizer $d(\cdot)$ on $\Delta$ and considering the smoothed function
$\phi_\mu(x) := \max_{y \in \Delta}\ \{\, y^\top (Px - \mathbf{1};\ \mathbf{1} - Cx) - \mu\, d(y) \,\}.$
This strongly convex regularization yields that $\phi_\mu$ is a Lipschitz-smooth convex function. (Definitions of Lipschitz smoothness and strong convexity can be found in many texts on nonlinear programming and machine learning, e.g., [bubeck2014theory]; intuitively, a function is Lipschitz-smooth if the rate of change of its gradient can be bounded by a quantity known as the constant of Lipschitz smoothness.) If $L$ is the constant of Lipschitz smoothness of $\phi_\mu$, then application of any of the accelerated gradient methods in the literature will converge in $O(\sqrt{L/\epsilon})$ iterations. Moreover, it can also be shown that in order to construct a smooth $\epsilon$-approximation of $\phi$, the Lipschitz smoothness constant can be chosen to be of the order $1/\epsilon$, which in turn implies an overall convergence rate of $O(1/\epsilon)$. In particular, Nesterov's smoothing achieves an oracle complexity of $O(\sqrt{D_x D_y}/\epsilon)$ (up to norm factors of the constraint matrix), where $D_x$ and $D_y$ denote the sizes of the ranges of the respective regularizers, which are strongly convex functions. $D_x$ and $D_y$ can be made of the order of $n$ and $\log(p + c)$, respectively. However, $D_x$ can be problematic since $x$ belongs to an $\ell_\infty$ ball. More on this will soon follow. Nesterov's dual extrapolation algorithm [Nesterov07] gives a very similar complexity, but is a different algorithm in that it directly addresses the saddle point formulation (2) rather than viewing the problem as optimizing a non-smooth function $\phi$. The final convergence of the dual extrapolation algorithm is given in terms of the primal-dual gap function of the saddle point problem (2). This algorithm views the saddle point problem as solving a variational inequality for an appropriate monotone operator on the joint domain $B_\infty^+ \times \Delta$. Moreover, as opposed to smoothing techniques which only regularize the dual, this algorithm regularizes both the primal and dual parts, hence is a different scheme altogether.
Note that for both schemes mentioned above, the maximization oracle itself has an analytical expression which involves matrix-vector multiplication. Hence each call to the oracle incurs a sequential runtime of $O(N)$. The overall complexity of both schemes is then of the order $O(N\sqrt{n}/\epsilon)$, up to logarithmic factors.
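As an illustration of the smoothing step, the textbook instantiation with an entropy regularizer over the simplex turns $\phi$ into a log-sum-exp function; it is $O(1/\mu)$-smooth and uniformly $\mu\log(p+c)$-close to $\phi$. The paper's precise regularizer choices may differ; this is only the standard construction:

```python
import numpy as np

def phi_smooth(P, C, x, mu):
    """Entropy-regularized inner maximization: the smoothed objective is
    mu * log(sum_i exp(viol_i / mu)), a Lipschitz-smooth proxy for
    max_i viol_i, within additive mu * log(#terms)."""
    viol = np.concatenate([P @ x - 1.0, 1.0 - C @ x])
    m = viol.max()                                 # for numerical stability
    return float(m + mu * np.log(np.exp((viol - m) / mu).sum()))
```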
The barrier
Note that both methods, i.e., Nesterov's smoothing and dual extrapolation, involve the term $D_x$, which denotes the range of a strongly convex function over the domain $B_\infty^+$. The following lemma states a lower bound for this range in the case of $\ell_\infty$ balls.
Lemma 3.1.
Any strongly convex function has a range of at least $\Omega(n)$ on any $\ell_\infty$ ball in $\mathbb{R}^n$.
Since $D_x = \Omega(n)$ for each member of this wide class of functions,
there is no hope of eliminating this factor using techniques
involving explicit use of strong convexity.
So, the goal now is to find functions with a small range over $\ell_\infty$ balls
which still act as good enough regularizers to enable accelerated convergence of
the descent algorithm.
In pursuit of breaking this barrier, we draw inspiration
from the notion of area convexity, introduced by Sherman [Sherman17].
Area convexity is a weaker notion than strong convexity; however, it
is still strong enough to ensure that accelerated first-order methods go through when
using area convex regularizers. Since this is a weaker notion than strong convexity, we can construct area convex functions whose range on the $\ell_\infty$ ball is only polylogarithmic in the dimension.
First, we define area convexity, and then go on to describe its relevance to the saddle point problem (2). Area convexity is a notion defined in the context of an antisymmetric matrix $J$ and a convex set $K$.
Definition 3.2.
A function $\phi$ is area convex with respect to an antisymmetric matrix $J$ on a convex set $K$ iff for any $x, y, z \in K$, $\phi$ satisfies
$\phi\Big(\frac{x+y+z}{3}\Big) \;\le\; \frac{\phi(x) + \phi(y) + \phi(z)}{3} \;-\; \frac{1}{3}\,\langle J(z - x),\, y - x \rangle.$
To understand the definition above, let us first look at the notion of strong convexity: $\phi$ is strongly convex if for any two points $x, y$, the average $\frac{\phi(x) + \phi(y)}{2}$ exceeds $\phi\big(\frac{x+y}{2}\big)$ by an amount proportional to $\|x - y\|^2$. Definition 3.2 generalizes this notion, in the context of the matrix $J$, to any three points $x, y, z$: $\phi$ is area convex on a set $K$ if for any three points $x, y, z \in K$, the average $\frac{\phi(x) + \phi(y) + \phi(z)}{3}$ exceeds $\phi\big(\frac{x+y+z}{3}\big)$ by an amount proportional to the area of the triangle defined by the convex hull of $x, y, z$.
Consider the case where the points $x, y, z$ are collinear. In this case, the area term (i.e., the term involving $J$) in Definition 3.2 is $0$, since the matrix $J$ is antisymmetric. In this sense, area convexity is even weaker than strict convexity. Moreover, the notion of area is parameterized by the matrix $J$. To see a specific example of this notion of area, consider $n = 2$ and $J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$. Then, over all possible permutations of $x, y, z$, the area term takes the values $\pm \langle J(z - x), y - x \rangle$. Since the condition holds irrespective of the permutation, we must have that $\frac{\phi(x) + \phi(y) + \phi(z)}{3} - \phi\big(\frac{x+y+z}{3}\big) \ge \frac{1}{3} \big| \langle J(z - x), y - x \rangle \big|$. But note that the area of the triangle formed by the points $x, y, z$ is equal to $\frac{1}{2} \big| \langle J(z - x), y - x \rangle \big|$ for this choice of $J$. Hence the area term is just a high-dimensional, matrix-based generalization of the area of a triangle.
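This two-dimensional example can be verified numerically: for the matrix $J$ above, the bilinear area term equals (up to sign) twice the shoelace-formula area of the triangle. A quick numpy check, with a right triangle of legs 2 and 3 (area 3):

```python
import numpy as np

# For J = [[0, 1], [-1, 0]], the term <J(z - x), y - x> equals twice the
# signed area of the triangle (x, y, z); permuting the points only flips sign.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
x, y, z = np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 3.0])
area_term = (J @ (z - x)) @ (y - x)
twice_area = float(np.cross(y - x, z - x))   # shoelace: 2 * signed area
print(area_term, twice_area)                 # 6.0 6.0
```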
Coming back to the saddle point problem (2), we need to pick a suitable area convex function on the set $B_\infty^+ \times \Delta$. Since this function is defined on the joint space, it has the property of joint regularization vis-à-vis (2). However, we need an additional parameter: a suitable matrix $J$. The choice of this matrix is related to the bilinear form of the primal-dual gap function of (2). We delve into the technical details of this in Section 4; here we only note that the matrix $J$ is composed of $P$, $C$ and some additional constants. The algorithm we state exactly follows Nesterov's dual extrapolation method described earlier. One notable difference is that in [Nesterov07], the author considers joint regularization by a strongly convex function which does not depend on the problem matrices but only on the constraint set. Our area convex regularizer, on the other hand, is tailor-made for the particular problem matrices as well as the constraint set.
4 Area Convexity for Mixed Packing Covering LPs
In this section, we present our technical results and algorithm for the MPC problem, with the end goal of proving Theorem 1.1. First, we relate an approximate solution of the saddle point problem to an approximate solution of MPC. Next, we present some theoretical background for choosing and analyzing an appropriate area convex regularizer in the context of the saddle point formulation, where the key requirement on the area convex function is to obtain a provable and efficient convergence result. Finally, we explicitly exhibit an area convex function which is generated using a simple "gadget" function. We show that this area convex function satisfies all the key requirements and hence achieves the desired accelerated rate of convergence. This section closely follows [Sherman17], in which the author chooses an area convex function specific to the undirected multicommodity flow problem. Due to space constraints, we relegate almost all proofs to Appendix A and simply include pointers to proofs in [Sherman17] when they are directly applicable.
4.1 Saddle Point Formulation for MPC
Consider the saddle point formulation in (2) for the MPC problem. Given a feasible primal-dual solution pair $x \in B_\infty^+$ and $y = (u, v) \in \Delta$ for (2), we write $z = (x, y)$ and let $\Lambda(x, y)$ denote the bilinear objective of (2). Then, we define a function $\omega$ as
$\omega(z) \;=\; \max_{\hat{y} \in \Delta} \Lambda(x, \hat{y}) \;-\; \min_{\hat{x} \in B_\infty^+} \Lambda(\hat{x}, y).$
Note that $\omega(z)$
is precisely the primal-dual gap function defined in Section 2. Notice that if $z^* = (x^*, y^*)$ is a saddle point of (2), then we have
$\Lambda(x, y^*) \;\ge\; \Lambda(x^*, y^*) \;\ge\; \Lambda(x^*, y)$
for all $(x, y) \in B_\infty^+ \times \Delta$. From the above equation, it is clear that $\omega(z) \ge 0$ for all $z = (x, y)$ with $x \in B_\infty^+$ and $y \in \Delta$. Moreover, $\omega(z^*) = 0$. This motivates the following accuracy measure of a candidate approximate solution $z$.
Definition 4.1.
We say that $z$ is an $\epsilon$-optimal solution of (2) iff $\omega(z) \le \epsilon$.
Remark 4.2.
The gap function $\omega$ can be written as a bilinear form on the joint domain $B_\infty^+ \times \Delta$, given by an antisymmetric matrix $J$ built from $P$, $C$ and the all-ones vectors; we use this form in Section 4.2.
Lemma 4.3.
Let $\bar{z} = (\bar{x}, \bar{y})$ satisfy $\omega(\bar{z}) \le \epsilon$. Then either
1. $\bar{x}$ is an $\epsilon$-approximate solution of MPC, or
2. $\bar{y}$ satisfies $\Lambda(x, \bar{y}) > 0$ for all $x \in B_\infty^+$, certifying that MPC is infeasible.
4.2 Area Convexity with Saddle Point Framework
Here we state some useful lemmas which help in determining whether a differentiable function is area convex. We start with the following remark, which follows directly from the definition of area convexity (Definition 3.2).
Remark 4.4.
If $\phi$ is area convex with respect to $J$ on a convex set $K$, and $K' \subseteq K$ is a convex set, then $\phi$ is area convex with respect to $J$ on $K'$.
The following two lemmas from [Sherman17] provide the key characterization of area convexity.
Lemma 4.5.
Let $M = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ be a symmetric matrix and $J = \begin{pmatrix} 0 & d \\ -d & 0 \end{pmatrix}$ be an antisymmetric matrix. Then $M \succeq J$ if and only if $M \succeq 0$ and $\det(M) \ge d^2$.
Lemma 4.6.
Let $\phi$ be twice differentiable on the interior of a convex set $K$.
1. If $\phi$ is area convex with respect to $J$ on $K$, then $\nabla^2 \phi(x) \succeq J$ for all $x \in \mathrm{int}(K)$.
2. If $\nabla^2 \phi(x) \succeq 3J$ for all $x \in \mathrm{int}(K)$, then $\phi$ is area convex with respect to $J$ on $\mathrm{int}(K)$. Moreover, if $\phi$ is continuous on $\mathrm{cl}(K)$, then $\phi$ is area convex with respect to $J$ on $\mathrm{cl}(K)$.
In order to handle the $\succeq$ operator between symmetric and antisymmetric matrices (recall its definition from Section 2), we state some basic but important properties of this operator, which will come in handy in later proofs.
Remark 4.7.
For symmetric matrices $M_1$ and $M_2$ and antisymmetric matrices $J_1$ and $J_2$:
1. If $M_1 \succeq J_1$, then $M_1 \succeq -J_1$.
2. If $M_1 \succeq J_1$ and $M_2 \succeq J_2$, then $M_1 + M_2 \succeq J_1 + J_2$.
3. If $M_1 \succeq J_1$ and $M_2 \succeq J_2$, then $M_1 + M_2 \succeq J_1 - J_2$.
Having laid a basic foundation for area convexity, we now focus on its relevance to solving the saddle point problem (2). Considering Remark 4.2, we can write the gap function criterion of optimality in terms of the bilinear form of the matrix $J$. Suppose we have a function $\phi$ which is area convex with respect to $J$ on the set $Z = B_\infty^+ \times \Delta$. Then, consider the following jointly-regularized version of the bilinear form:
(3)  $\displaystyle \Phi(s) \;=\; \max_{z \in Z}\ \langle s, z \rangle - \phi(z).$
Similar to Nesterov's dual extrapolation, one can attain accelerated convergence by running gradient-based updates on the function $\Phi$ in (3) over the variable $s$. In order to obtain gradients of $\Phi$, we need access to the maximizer in (3). However, it may not be possible to find an exact maximizer in all cases. Again, one can get around this difficulty by instead using an approximate optimization oracle for the problem in (3).
Definition 4.8.
A $\delta$-optimal solution oracle ($\delta$-OSO) for $\phi$ takes an input $s$ and outputs $z' \in Z$ such that
$\langle s, z' \rangle - \phi(z') \;\ge\; \max_{z \in Z}\ \{\langle s, z \rangle - \phi(z)\} - \delta.$
Given a $\delta$-OSO $\Pi$ for a function $\phi$, consider the following algorithm (Algorithm 1):
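As an illustration of the scheme, the following sketch shows a dual-extrapolation-style loop built around the $\delta$-OSO, in the spirit of [Nesterov07] and [Sherman17]. The step size, sign conventions, and averaging here are simplifying assumptions of ours for readability, not the exact parameters of Algorithm 1:

```python
import numpy as np

def dual_extrapolation(oso, F, z_dim, T, lam=1.0):
    """Schematic dual extrapolation: oso(s) approximately maximizes
    <s, z> - phi(z) over the joint domain Z, F(z) is the (monotone)
    gradient operator of the saddle-point problem, and the iterate
    average is the candidate approximate saddle point."""
    s = np.zeros(z_dim)          # accumulated dual state
    acc = np.zeros(z_dim)
    for _ in range(T):
        u = oso(s)               # extrapolation point
        w = oso(s - lam * F(u))  # corrected point after a look-ahead step
        s = s - lam * F(w)       # accumulate the operator at the corrected point
        acc += w
    return acc / T
```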
Lemma 4.9.
Suppose $\phi$ is area convex with respect to $J$ on $Z$ and has range $D$ on $Z$. Then $O(D/\epsilon)$ iterations of Algorithm 1, each making a constant number of calls to the $\delta$-OSO with $\delta = O(\epsilon)$, yield a point $\bar{z}$ with $\omega(\bar{z}) \le \epsilon$. The analysis of this lemma closely follows the analysis of Nesterov's dual extrapolation.
Note that each iteration consists of matrix-vector multiplications, vector additions, and calls to the approximate oracle. Since the former two are parallelizable to $O(\log N)$ depth, it remains to show the same for the oracle computation in order to complete the proof of the runtime in Theorem 1.1.
Recall from the discussion in Section 3 that the critical bottleneck of Nesterov's method is that the diameter of the $\ell_\infty$ ball is large; even in the Euclidean norm it is $2\sqrt{n}$. This makes $D$ in Lemma 4.9 also be $\Omega(n)$, which can be a major bottleneck for high-dimensional LPs, which are commonplace among real-world applications.
Although, on the face of it, area convexity applied to the saddle point formulation (2)
fits the same framework as Nesterov's dual extrapolation,
the challenge is to construct a regularizer $\phi$ for which we can overcome the above bottleneck.
In particular, there are three key challenges to tackle:
1. We need to show the existence of a function $\phi$ that is area convex with respect to $J$ on $B_\infty^+ \times \Delta$.
2. $\phi$ should be such that its range on $B_\infty^+ \times \Delta$ is not too large.
3. There should exist an efficient $\delta$-OSO for $\phi$.
In the next subsection, we address these three aspects in order to complete our analysis.
4.3 Choosing an area convex function
First, we consider a simple two-dimensional gadget function and prove a "nice" property of this gadget. Using this gadget, we construct a function which can be shown to be area convex using the aforementioned property of the gadget.
Let $\gamma_\beta$ be a two-variable gadget function parameterized by $\beta$.
Lemma 4.10.
Suppose $\beta$ is sufficiently large. Then the Hessian of $\gamma_\beta$ satisfies the two-dimensional area convexity criterion of Lemma 4.5 at every point of its domain.
Now, using the function $\gamma_\beta$, we construct a function $\phi$ and use the sufficiency criterion provided in Lemma 4.6 to show that $\phi$ is area convex with respect to $J$ on $B_\infty^+ \times \Delta$. Note that our set of interest is not full-dimensional, whereas Lemma 4.6 is stated only for $\mathrm{int}(K)$ and not for $\mathrm{relint}(K)$. To get around this difficulty, we consider a larger set $\hat{K}$ such that $\hat{K}$ is full-dimensional and $\phi$ is area convex on $\hat{K}$. Then we use Remark 4.4 to obtain the final result, i.e., area convexity of $\phi$ on $B_\infty^+ \times \Delta$.
Theorem 4.11.
Let $\phi$ be the function obtained by combining the gadget $\gamma_\beta$ across the coordinates of $(x, y)$, weighted according to the entries of $P$ and $C$, for a suitable choice of the parameter $\beta$. Then $\phi$ is area convex with respect to $J$ on the set $\hat{K}$. In particular, this also implies that $\phi$ is area convex with respect to $J$ on the set $B_\infty^+ \times \Delta$.
Theorem 4.11 addresses the first of the three key challenges. Next, Lemma 4.12 shows an upper bound on the range of $\phi$.
Lemma 4.12.
The range of the function $\phi$ from Theorem 4.11 on the set $B_\infty^+ \times \Delta$ is $\tilde{O}(w)$.
Finally, we need an efficient $\delta$-OSO. Consider the following alternating maximization procedure (Algorithm 2), sketched schematically below.
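Schematically, the procedure alternates exact block maximizations of the regularized objective (3): fix the dual block and maximize over the primal block, then vice versa. The block oracles below stand in for the closed forms of Lemma 4.14; the function and argument names are ours.

```python
def alternating_oso(argmax_x, argmax_y, y0, iters):
    """Alternating maximization for max_{(x, y)} <s, (x, y)> - phi(x, y):
    each sweep exactly maximizes one block while the other is held fixed.
    With closed-form block argmaxes, one sweep costs a single pass over
    the nonzeros of P and C. Assumes iters >= 1."""
    x, y = None, y0
    for _ in range(iters):
        x = argmax_x(y)   # exact maximization over the primal block
        y = argmax_y(x)   # exact maximization over the dual block
    return x, y
```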
Beck [Beck15] shows the following convergence result.
Lemma 4.13.
For the function $\phi$ from Theorem 4.11, Algorithm 2 is a $\delta$-OSO which converges in $O(\log(1/\delta))$ iterations.
We show that for our chosen $\phi$, we can compute the two argmaxes in each iteration of Algorithm 2 analytically in $O(N)$ computation time, and hence we obtain a $\delta$-OSO running in $O(N \log(1/\delta))$ total work. Parallelizing the matrix-vector multiplications eliminates the depth's dependence on the dimensions $n$, $p$ and $c$, at the cost of another logarithmic term.
Lemma 4.14.
Each argmax in Algorithm 2 admits a closed-form, coordinate-wise expression in terms of $P$, $C$ and the current iterate.
In particular, we can compute it in $O(N)$ work and $O(\log N)$ parallel time.
Acknowledgements
We thank Richard Peng for many important pointers and discussions.
References
 (1) Allen-Zhu, Z., and Orecchia, L. Nearly linear-time packing and covering LP solvers: achieving width-independence and $O(1/\epsilon)$-convergence. Math. Program. 175, 1-2 (2019), 307–353.
 (2) Bahmani, B., Goel, A., and Munagala, K. Efficient primal-dual graph algorithms for MapReduce. In Algorithms and Models for the Web Graph - 11th International Workshop, WAW 2014, Beijing, China, December 17-18, 2014, Proceedings (2014), pp. 59–78.
 (3) Bartal, Y., Byers, J. W., and Raz, D. Fast, distributed approximation algorithms for positive linear programming with applications to flow control. SIAM J. Comput. 33, 6 (2004), 1261–1279.
 (4) Beck, A. On the convergence of alternating minimization for convex programming with applications to iteratively reweighted least squares and decomposition schemes. SIAM Journal on Optimization 25, 1 (2015), 185–209.
 (5) Bienstock, D., and Iyengar, G. Approximating fractional packings and coverings in $O(1/\epsilon)$ iterations. SIAM J. Comput. 35, 4 (2006), 825–854.
 (6) Bubeck, S. Theory of convex optimization for machine learning. arXiv preprint arXiv:1405.4980 15 (2014).
 (7) Charikar, M. Greedy approximation algorithms for finding dense components in a graph. In Proceedings of the Third International Workshop on Approximation Algorithms for Combinatorial Optimization (Berlin, Heidelberg, 2000), APPROX ’00, pp. 84–95.
 (8) Grigoriadis, M. D., and Khachiyan, L. G. Approximate minimum-cost multicommodity flows in $\tilde{O}(\epsilon^{-2} k n m)$ time. Math. Program. 75 (1996), 477–482.

 (9) Luby, M., and Nisan, N. A parallel approximation algorithm for positive linear programming. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, May 16-18, 1993, San Diego, CA, USA (1993), pp. 448–457.
 (10) Mahoney, M. W., Rao, S., Wang, D., and Zhang, P. Approximating the solution to mixed packing and covering LPs in parallel $\tilde{O}(\epsilon^{-3})$ time. In 43rd International Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy (2016), pp. 52:1–52:14.
 (11) Nesterov, Y. Smooth minimization of nonsmooth functions. Math. Program. 103, 1 (2005), 127–152.
 (12) Nesterov, Y. Dual extrapolation and its applications to solving variational inequalities and related problems. Math. Program. 109, 2-3 (2007), 319–344.
 (13) Nesterov, Y. Efficiency of coordinate descent methods on hugescale optimization problems. SIAM Journal on Optimization 22, 2 (2012), 341–362.
 (14) Plotkin, S. A., Shmoys, D. B., and Tardos, É. Fast approximation algorithms for fractional packing and covering problems. Math. Oper. Res. 20, 2 (1995), 257–301.
 (15) Sherman, J. Area-convexity, $\ell_\infty$ regularization, and undirected multicommodity flow. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017 (2017), pp. 452–460.
 (16) Wang, D., Rao, S., and Mahoney, M. W. Unified acceleration method for packing and covering problems via diameter reduction. In 43rd International Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy (2016), pp. 50:1–50:13.
 (17) Young, N. E. Sequential and parallel algorithms for mixed packing and covering. In 42nd Annual Symposium on Foundations of Computer Science, FOCS 2001, 14-17 October 2001, Las Vegas, Nevada, USA (2001), pp. 538–546.
 (18) Young, N. E. Nearly linear-time approximation schemes for mixed packing/covering and facility-location linear programs. CoRR abs/1407.3015 (2014).
 (19) Zurel, E., and Nisan, N. An efficient approximate allocation algorithm for combinatorial auctions. In Proceedings of the 3rd ACM Conference on Electronic Commerce (EC-2001), Tampa, Florida, USA, October 14-17, 2001 (2001), pp. 125–136.
Appendix A Proof of auxiliary results
In this section, we include proofs of lemmas from the main paper. In some cases, the lemmas are direct restatements of results from other papers, for which we provide appropriate pointers.
Proof of Lemma 3.1.
Consider an arbitrary strongly convex function $\phi$ on the $\ell_\infty$ ball $B_\infty \subseteq \mathbb{R}^n$. Assume WLOG that $\phi(0) = 0$ (otherwise, we can shift it accordingly). We show by induction on $k$ that the range of $\phi$ is $\Omega(k)$ on the sub-ball $B_k$ in which all but the first $k$ coordinates are fixed to $0$. This suffices because $B_n$ is isomorphic to $B_\infty$. The claim holds for $k = 1$ by the definition of strong convexity. Now, suppose it is true for $k - 1$: there exists $x \in B_{k-1}$ such that $\phi(x) = \Omega(k-1)$. Moving one unit in the $k$th coordinate from $x$ in the direction of nonnegative slope, suppose we reach $x'$. Then, due to strong convexity of $\phi$, we have $\phi(x') \ge \phi(x) + \Omega(1)$, which completes the induction. ∎
Proof of Lemma 4.3.
Suppose we are given $\bar{z} = (\bar{x}, \bar{y})$ such that $\omega(\bar{z}) \le \epsilon$. If there exists $x^*$ which is feasible for MPC, then choosing $\hat{x} = x^*$ in the gap function gives $\Lambda(x^*, \bar{y}) \le 0$. Hence we have
$\max_{\hat{y} \in \Delta} \Lambda(\bar{x}, \hat{y}) \;\le\; \omega(\bar{z}) + \Lambda(x^*, \bar{y}) \;\le\; \epsilon,$
where the maximum over the extended simplex upper bounds every individual packing violation $(P\bar{x} - \mathbf{1})_i$ and covering violation $(\mathbf{1} - C\bar{x})_j$. So we obtain: if there exists a feasible solution for MPC, then $\bar{x}$ is an $\epsilon$-approximate solution of MPC.
On the other hand, suppose $\bar{x}$ is not an $\epsilon$-approximate solution. Then $\max_{\hat{y} \in \Delta} \Lambda(\bar{x}, \hat{y}) > \epsilon$, and therefore, for every $x \in B_\infty^+$, we have
$\Lambda(x, \bar{y}) \;\ge\; \max_{\hat{y} \in \Delta} \Lambda(\bar{x}, \hat{y}) - \omega(\bar{z}) \;>\; 0.$
Hence, if $\bar{x}$ is not an $\epsilon$-approximate solution of MPC, then $\bar{y}$ satisfies $\Lambda(x, \bar{y}) > 0$ for all $x \in B_\infty^+$, implying that MPC is infeasible. ∎
Proof of Lemma 4.5.
Let $M = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ and $J = \begin{pmatrix} 0 & d \\ -d & 0 \end{pmatrix}$.
By definition, $M \succeq J$ iff the Hermitian matrix
$M + iJ = \begin{pmatrix} a & b + id \\ b - id & c \end{pmatrix}$
is positive semidefinite, which holds iff all its principal minors are nonnegative. The $1 \times 1$ minors give $a \ge 0$ and $c \ge 0$, and the determinant gives $ac - b^2 - d^2 \ge 0$. Together, these conditions are equivalent to $M \succeq 0$ and $\det(M) \ge d^2$. Hence we conclude the proof.
∎
Proof of Lemma 4.6.
This lemma appears exactly as Theorem 1.6 in [Sherman17]. The proof follows from the same. ∎
Proof of Remark 4.7.

1. $M_1 \succeq J_1$ iff $v^*(M_1 + iJ_1)v \ge 0$ for all complex vectors $v$. Here, replacing $v$ by its conjugate $\bar{v}$ shows that this is equivalent to $v^*(M_1 - iJ_1)v \ge 0$ for all $v$, i.e., $M_1 \succeq -J_1$. Hence we conclude the proof of part 1.
2. Adding the positive semidefinite Hermitian matrices $M_1 + iJ_1$ and $M_2 + iJ_2$ gives $(M_1 + M_2) + i(J_1 + J_2) \succeq 0$, i.e., $M_1 + M_2 \succeq J_1 + J_2$.
3. $M_2 \succeq J_2$ implies $M_2 \succeq -J_2$ by part 1. Combining this with $M_1 \succeq J_1$ via part 2, we obtain $M_1 + M_2 \succeq J_1 - J_2$.
∎
Proof of Lemma 4.9.
This lemma appears as Theorem 1.3 in [Sherman17], and the proof follows from the same. ∎
Proof of Lemma 4.10.
We use the equivalent characterization proved in Lemma 4.5: we need to show that the diagonal entries of the Hessian of $\gamma_\beta$ are nonnegative and that its determinant dominates the square of the corresponding off-diagonal area parameter, at every point of the domain. First of all, note that the Hessian of $\gamma_\beta$ is well-defined on this domain.
Note that a $2 \times 2$ symmetric matrix is PSD if and only if its diagonal entries and determinant are nonnegative. Clearly, the diagonal entries of the Hessian are nonnegative for the given values of $a$ and $b$. Hence, in order to prove the lemma, it suffices to show that the determinant condition holds.