Yaqiao Li, Pierre McKenzie, Perspective on complexity measures targeting read-once branching programs,
Information and Computation (2024).
arXiv:2305.11276.
Summary:
A branching program (also known as a binary decision diagram) is a more flexible counterpart of a decision tree that represents/computes Boolean functions. It is: (i) a basic model for studying computational space, (ii) closely related to pseudorandomness, (iii) a fundamental data structure, (iv) useful for circuit synthesis and formal verification, etc.
This paper studied the minimum size of read-once branching programs from the perspective of proposing lower bound complexity measures, and demonstrated their strengths as well as their limitations, for instance on Tseitin formulas (which are relevant to proof complexity) and on the Tree Evaluation Problem (which is studied for understanding a fundamental open problem in complexity: Logspace vs Polytime).
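To make the model concrete, here is a small Python sketch (written for this page, not code from the paper) that builds a reduced ordered BDD, a special kind of read-once branching program, by Shannon expansion; the function parity and the variable order are illustrative choices.

```python
def build_obdd(f, n):
    """Build a reduced ordered BDD (a special read-once branching program)
    for a Boolean function f on n variables, by Shannon expansion on the
    variables in order x0, x1, ..., merging identical (level, low, high)
    nodes and skipping redundant tests.  Exponential time in n; a toy."""
    nodes = {}              # (level, low, high) -> node id
    TRUE, FALSE = "1", "0"  # terminal nodes

    def build(assignment):
        i = len(assignment)
        if i == n:
            return TRUE if f(assignment) else FALSE
        low = build(assignment + (0,))   # branch x_i = 0
        high = build(assignment + (1,))  # branch x_i = 1
        if low == high:                  # value does not depend on x_i here
            return low
        key = (i, low, high)
        if key not in nodes:
            nodes[key] = len(nodes)
        return nodes[key]

    root = build(())
    return root, nodes

# Parity is a classic function with a small (linear-size) OBDD.
parity = lambda bits: sum(bits) % 2 == 1
root, nodes = build_obdd(parity, 4)
print(len(nodes), "internal nodes")   # 7 internal nodes, plus two terminals
```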
Summary:
Is quantum computing more powerful than classical computing? A number of recently built quantum computers have demonstrated computational advantages.
This paper, using the lifting technique from communication complexity, proved that in the two-way finite automata model, quantum computing is more time-space efficient than classical computing.
Summary:
Information complexity, which incorporates Shannon's information theory into communication models, is a continuous analogue of communication complexity. Initially it was used to help study communication complexity, and it later found many other applications, e.g., in optimization and in circuit complexity.
This paper, the second of two parts, completed the basic theory of how information complexity depends on the error allowed in the computation. In particular, this led to an accurate formula for the error-allowed information complexity of the XOR function under a special distribution.
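For context, one standard formulation (not specific to this paper) of the internal information cost of a protocol $\pi$ on inputs $(X, Y) \sim \mu$, and of the $\epsilon$-error information complexity of a function $f$, is

$$ \mathrm{IC}_\mu(\pi) \;=\; I(X; \Pi \mid Y) \;+\; I(Y; \Pi \mid X), \qquad \mathrm{IC}_\mu(f, \epsilon) \;=\; \inf_{\pi \text{ computes } f \text{ with error} \le \epsilon} \mathrm{IC}_\mu(\pi), $$

where $\Pi$ denotes the protocol transcript. The dependency of this quantity on $\epsilon$ is exactly what this pair of papers studies.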
Summary:
If you use a decision tree to compute the composition of two Boolean functions f and g, how does this relate to a decision tree computing f and a decision tree computing g? The measure of conflict complexity was proposed to study (a randomized version of) this problem. Block sensitivity is another complexity measure; it captures how a function's value is affected by flipping a block of its input bits.
This paper showed that block sensitivity is no greater than conflict complexity, and posed an open problem concerning conflict complexity and certificate complexity, a solution of which would have interesting consequences.
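As a concrete illustration of the measure mentioned above (a brute-force sketch written for this page, not code from the paper), block sensitivity at an input can be computed for tiny functions as follows:

```python
from itertools import combinations

def block_sensitivity_at(f, x):
    """Brute-force block sensitivity of f at input x (a tuple of bits):
    the maximum number of pairwise disjoint blocks B_1, ..., B_k of
    coordinates such that flipping x inside each block changes f(x).
    Exponential time; only meant for tiny inputs."""
    n = len(x)

    def flip(x, block):
        return tuple(b ^ 1 if i in block else b for i, b in enumerate(x))

    # all sensitive blocks, as frozensets of coordinates
    sensitive = [frozenset(B)
                 for r in range(1, n + 1)
                 for B in combinations(range(n), r)
                 if f(flip(x, frozenset(B))) != f(x)]

    # maximum packing of pairwise disjoint sensitive blocks (exhaustive search)
    best = 0
    def search(i, used, count):
        nonlocal best
        best = max(best, count)
        for j in range(i, len(sensitive)):
            if not (sensitive[j] & used):
                search(j + 1, used | sensitive[j], count + 1)
    search(0, frozenset(), 0)
    return best

# Example: 3-bit majority at input 110 has block sensitivity 2,
# witnessed by the disjoint sensitive blocks {0} and {1}.
maj = lambda bits: int(sum(bits) >= 2)
print(block_sensitivity_at(maj, (1, 1, 0)))   # -> 2
```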
Summary:
Information complexity, which incorporates Shannon's information theory into communication models, is a continuous analogue of communication complexity. Initially it was used to help study communication complexity, and it later found many other applications, e.g., in optimization and in circuit complexity.
This paper extended the optimal protocol for the two-party AND function of Braverman et al. to the multi-party setting; doing so required inventing various reductions and carrying out meticulous calculations and computations.
Summary:
Information complexity, which incorporates Shannon's information theory into communication models, is a continuous analogue of communication complexity. Initially it was used to help study communication complexity, and it later found many other applications, e.g., in optimization and in circuit complexity.
This paper, the first of two parts, initiated the basic theory of how information complexity depends on the error allowed in the computation. This led to a more accurate formula for the randomized communication complexity of the important disjointness function, and to a solution of an open problem of Braverman.
Online Computation
Yaqiao Li,
Ali Mohammad Lavasani,
Denis Pankratov, Online interval selection on a simple chain,
submitted.
Summary:
Interval selection models many resource-optimizing scheduling problems and other conflict-resolving problems. This work shows that a natural revoking algorithm in fact performs worse than the greedy algorithm when the input intervals form a simple chain and arrive in random order. The solution relies on a clever choice of a partition, and on computing the resulting infinite series by solving differential equations. We also obtain other bounds, in particular a first nearly tight bound for advice complexity in this model. A fascinating challenge (on which we have made unsuccessful attempts) is to understand the performance of the revoking algorithm on general interval graphs.
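The following small Monte Carlo sketch (illustrative only; it simulates the plain greedy baseline on a hypothetical chain instance, not the revoking algorithm analyzed in the paper) shows the random-order setting: the conflict graph of the intervals is a path, and greedy accepts an arriving interval iff it conflicts with nothing accepted so far.

```python
import random

def greedy_on_chain(n, order):
    """Non-revoking greedy on a chain of n intervals: interval i conflicts
    only with intervals i-1 and i+1, and intervals arrive in `order`."""
    accepted = set()
    for i in order:
        if (i - 1) not in accepted and (i + 1) not in accepted:
            accepted.add(i)
    return len(accepted)

def average_ratio(n, trials=2000, seed=0):
    """Estimate E[greedy / OPT] over a uniformly random arrival order."""
    rng = random.Random(seed)
    opt = (n + 1) // 2          # maximum independent set of a path on n vertices
    total = 0.0
    for _ in range(trials):
        order = list(range(n))
        rng.shuffle(order)
        total += greedy_on_chain(n, order) / opt
    return total / trials

for n in (10, 100, 1000):
    print(n, round(average_ratio(n), 4))
```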
Summary:
Computing in the cloud has many advantages that lead to better overall social welfare; for example, some companies, such as those in the gaming industry, rent servers from public cloud providers for their computational tasks. A basic problem is how to schedule diverse requests so as to minimize the cost of running cloud servers; this is a dynamic vector bin packing problem called RSiC.
This paper, among other results, proposed a new and natural greedy-type algorithm that we call 'Greedy', and demonstrated in simulations that it outperforms other existing algorithms in the majority of cases. Thus, 'Greedy' could be useful for industrial applications.
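To make the RSiC model concrete, here is a hedged Python sketch of a plain First-Fit baseline (not the paper's 'Greedy' algorithm, and with one simple accounting of the renting cost); items are hypothetical triples (arrival, departure, demand vector), and departure times are assumed to be known on arrival.

```python
def first_fit_rsic(items, capacity):
    """Toy First-Fit baseline for RSiC (renting servers in the cloud).
    Each item is (arrival, departure, demand), where demand is a vector;
    a server can host any set of simultaneously present items whose
    demands sum to at most `capacity` coordinate-wise."""
    servers = []    # each server is the list of items assigned to it

    def fits(server, new_item):
        a, d, demand = new_item
        # load inside [a, d) can only increase at arrivals of server items
        checkpoints = {a} | {ia for ia, _, _ in server if a <= ia < d}
        for t in checkpoints:
            load = list(demand)
            for ia, idp, dem in server:
                if ia <= t < idp:                      # item present at time t
                    load = [u + v for u, v in zip(load, dem)]
            if any(u > c for u, c in zip(load, capacity)):
                return False
        return True

    for item in sorted(items):                         # process by arrival time
        for server in servers:
            if fits(server, item):
                server.append(item)
                break
        else:
            servers.append([item])

    # one simple cost accounting: each server is rented from its first
    # arrival until the departure of its last item
    cost = sum(max(d for _, d, _ in s) - min(a for a, _, _ in s) for s in servers)
    return cost, len(servers)

# Tiny example with 2-dimensional demands and unit capacity in each dimension.
items = [(0, 5, (0.6, 0.2)), (1, 4, (0.5, 0.5)), (2, 6, (0.3, 0.7))]
print(first_fit_rsic(items, capacity=(1.0, 1.0)))   # -> (9, 2)
```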
Summary:
Matching is a common task in nature, society, and mathematics. Avoiding crossings (of matching lines/curves) is a desired, and sometimes indispensable, condition, for example in circuit design and in understanding RNA structures.
This paper introduced the online and weighted versions of the non-crossing matching problem, and gave optimal or competitive online algorithms and lower bounds in various models. In particular, we found a beautiful 1/3-competitive randomized algorithm that uses a binary tree to guide its matching decisions.
Yaqiao Li,
Denis Pankratov, Online Vector Bin Packing and Hypergraph Coloring Illuminated: Simpler Proofs and New Connections,
LAGOS 2023,
arXiv:2306.11241.
Summary:
Bin packing is a classical NP-complete optimization problem that is also of practical importance, for example in loading cargo for shipping, creating file backups on media, etc.
This paper introduced the online incidence matrix associated with an online hypergraph, which naturally leads to a simple reduction from online hypergraph coloring to online vector bin packing, thereby simplifying a previous breakthrough by Azar et al. This paper also showed that a 'diverse' multi-family has an interesting intersecting property, thereby resolving a conjecture on online hypergraph coloring by Nagy-György and Imreh.
Summary:
Many large networks are constructed online, such as social networks and the internet. Such graphs can be modelled by an online graph, where at each time step a vertex appears together with its set of edges to some of the existing vertices. A fundamental question is: suppose an online graph has completed its construction and turns out to be k-colorable; how many colors do we need if we have to irrevocably color every vertex as soon as it appears? A line of work including Lovász et al. and Gutowski et al. gives a satisfactory understanding for bipartite graphs. For 3-colorable graphs nontrivial progress has also been made, but the picture is far from satisfactory.
Motivated to explore the strengths and weaknesses of existing techniques, this paper introduced the bounded-components model. On the one hand, we discovered that the efficiency of coloring an online bipartite graph is accurately determined by the number of connected components during the online construction. In comparison, we showed that bounding the number of components is not a serious constraint for 3-colorable graphs.
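As a minimal illustration of the online model (First-Fit is the textbook baseline, not one of the algorithms studied in the paper), here is a Python sketch together with a 4-vertex, 2-colorable arrival sequence that already forces First-Fit to use 3 colors:

```python
def first_fit_online_coloring(online_graph):
    """First-Fit online graph coloring.

    `online_graph` is a list of adjacency sets: entry i is the set of
    already-present vertices (indices < i) that vertex i is adjacent to,
    exactly as revealed in the online model.  Each vertex is irrevocably
    assigned the smallest color not used by its earlier neighbours."""
    colors = []
    for neighbours in online_graph:
        used = {colors[j] for j in neighbours}
        c = 0
        while c in used:
            c += 1
        colors.append(c)
    return colors

# A 2-colorable (tree) arrival sequence on which First-Fit needs 3 colors:
# vertices arrive as 0, 1, 2, 3; each reveals its edges to earlier vertices.
example = [set(), set(), {1}, {0, 2}]
print(first_fit_online_coloring(example))   # -> [0, 0, 1, 2]
```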
(pure) Mathematics
Yaqiao Li,
Ali Mohammad Lavasani,
Mehran Shakerinava,
Newman's theorem via Carathéodory,
arXiv:2406.08500.
Summary:
Randomness is useful, and sometimes necessary; e.g., you do not want to play a simple deterministic strategy in rock-paper-scissors. Randomness can also help save communication cost. Now, if you can share randomness with your communicating partner, versus each of you having only private randomness, is there a difference? Newman's theorem says they do differ, but not by much.
This paper gave a new short proof of Newman's theorem in communication complexity by applying the classical and the approximate Carathéodory theorems from convex geometry.
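In one standard formulation (stated here only for context), Newman's theorem says that for every $f : \{0,1\}^n \times \{0,1\}^n \to \{0,1\}$ and every $\epsilon, \delta > 0$,

$$ R^{\mathrm{priv}}_{\epsilon + \delta}(f) \;\le\; R^{\mathrm{pub}}_{\epsilon}(f) \;+\; O\!\left(\log \frac{n}{\delta}\right), $$

i.e., private randomness can simulate public randomness at the price of a small additive communication overhead and a small increase in error.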
Summary:
This paper, extending an idea of Impagliazzo et al., gave a short information-theoretic proof of Chang's lemma via Pinsker's inequality.
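For context (standard statements, not specific to this paper): Pinsker's inequality bounds total variation distance by KL divergence (measured in nats), and Chang's lemma bounds the dimension spanned by the large Fourier coefficients of a set $A \subseteq \mathbb{F}_2^n$ of density $\alpha$:

$$ \|P - Q\|_{TV} \;\le\; \sqrt{\tfrac{1}{2}\, D_{\mathrm{KL}}(P \,\|\, Q)}, \qquad \dim \mathrm{span}\bigl\{\gamma : |\widehat{1_A}(\gamma)| \ge \rho\,\alpha \bigr\} \;=\; O\!\left(\rho^{-2} \log(1/\alpha)\right). $$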
Summary:
The random graph is an important model in mathematics, physics, computer science, and in studying epidemics such as COVID-19. The Erdős–Rényi random graph has each edge generated independently with some probability p. A graph is quasi-random if its subgraph-count statistics are similar to those of the Erdős–Rényi random graph; e.g., the number of triangles should be close to its expected value, which is on the order of n^3 p^3.
Although graphs seem combinatorial in nature, this paper used an analytic tool called the Walsh expansion to understand the structure of functions satisfying a certain integration property; the structure discovered led to the resolution of some of the conjectures on quasi-random graphs by Janson and Sós. Pierre Deligne (Fields Medalist) wrote a letter commenting on our structure theorem.
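For concreteness, the expected number of triangles in the Erdős–Rényi graph $G(n, p)$ is

$$ \mathbb{E}[\#\triangle] \;=\; \binom{n}{3} p^3 \;=\; \frac{1 + o(1)}{6}\, n^3 p^3, $$

and this is the benchmark against which a quasi-random graph's triangle count is compared.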
Yaqiao Li,
The winning property of mixed badly approximable numbers,
Moscow J. Comb. Number Theory, vol 3 issue 1 (2013).
arXiv:1212.6584.
Summary:
This paper improved the winning property of mixed badly approximable numbers to the best possible, thus answering in the affirmative a question from a survey of N. Moshchevitin.
Summary:
Diversity is observed to be beneficial in social and working environments, and the UN calls biodiversity "our strongest natural defense against climate change". Darwin and Wallace already discovered that ecosystem diversity, via niche differentiation, supports higher productivity and better invasion resistance than a monoculture. Schelling, in his original study of segregation in residential neighborhoods (Dynamic models of segregation, 1971), had already, perhaps presciently, experimented with diversity-seeking agents (of two types only), called "integrationist", and discovered "phenomena that are not present in the case of purely separatist demands", e.g., "integration requires more complex patterning than separation". Hayes (The Math of Segregation, 2013) observed that "the social fabric of segregation has begun to fray" and that "the time has come for a model of racial reintegration".
This paper, in parallel to the famous Schelling game, modelled diversity-seeking agents and studied their swap games, whose equilibria are measured by several diversity measures we proposed. Both the theory and the simulations showed that segregation would be effectively removed if agents became diversity-seeking, though strong diversity measures such as evenness are still hard to achieve.
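The following toy Python sketch (a deliberately simplified illustration written for this page; the grid topology, utility, and stopping rule are not the paper's exact model) conveys the flavor of a swap game with diversity-seeking agents: two agents of different types exchange locations only when the swap strictly increases both of their fractions of opposite-type neighbours.

```python
import random

def neighbours(i, j, n):
    return [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di or dj) and 0 <= i + di < n and 0 <= j + dj < n]

def utility(grid, i, j, n):
    """Diversity-seeking utility: fraction of neighbours of the *other* type."""
    nbrs = neighbours(i, j, n)
    return sum(grid[a][b] != grid[i][j] for a, b in nbrs) / len(nbrs)

def swap_game(n=20, steps=200000, seed=0):
    rng = random.Random(seed)
    grid = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        (i, j), (k, l) = [(rng.randrange(n), rng.randrange(n)) for _ in range(2)]
        if grid[i][j] == grid[k][l]:
            continue                                   # only swap different types
        ua_before = utility(grid, i, j, n)             # agent A currently at (i, j)
        ub_before = utility(grid, k, l, n)             # agent B currently at (k, l)
        grid[i][j], grid[k][l] = grid[k][l], grid[i][j]
        ua_after = utility(grid, k, l, n)              # A has moved to (k, l)
        ub_after = utility(grid, i, j, n)              # B has moved to (i, j)
        if not (ua_after > ua_before and ub_after > ub_before):
            grid[i][j], grid[k][l] = grid[k][l], grid[i][j]   # undo the swap
    # a crude segregation measure: average fraction of same-type neighbours
    seg = sum(1 - utility(grid, i, j, n) for i in range(n) for j in range(n)) / n**2
    return seg

print(round(swap_game(), 3))
```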
Dingyu Zhang,
Yaqiao Li,
Nadia Bhuiyan,
A Tale of Two Organizational Structures,
pdf.
Summary:
An organization/team consists of diverse people, often with distinct capabilities; this we call the team structure. A team makes its decisions by aggregating individuals' decisions according to certain pre-specified rules (e.g., polyarchy, hierarchy, committee, etc.); this we call the decision structure. If an organization has a fixed team, how should it choose its decision structure? Similarly, if an organization has a fixed decision structure (e.g., one that cannot easily be changed), what types of people should it hire? More generally, if both structures are subject to design (e.g., for a new company), how does an organization systematically consider its various choices?
This paper, using university faculty hiring as a guiding example, modeled these two structures in one framework, studied carefully the interplay between the two, and examined how they together influence an organization's performance. Some findings are surprising, even counter-intuitive at first sight.
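As a small illustration of how a decision structure aggregates individual decisions (a textbook-style sketch in the spirit of Sah and Stiglitz; the specific rules and numbers are illustrative, not the paper's model):

```python
from itertools import product

def acceptance_probability(accept_probs, rule):
    """Probability that a team accepts a project, given each member's
    independent probability of voting to accept, under three classic
    decision structures:
      - 'hierarchy': accepted only if every member accepts (serial screening)
      - 'polyarchy': accepted if at least one member accepts
      - 'committee': accepted if a strict majority accepts"""
    n = len(accept_probs)
    total = 0.0
    for votes in product([0, 1], repeat=n):
        p = 1.0
        for v, q in zip(votes, accept_probs):
            p *= q if v else (1 - q)
        yes = sum(votes)
        accepted = {
            "hierarchy": yes == n,
            "polyarchy": yes >= 1,
            "committee": 2 * yes > n,
        }[rule]
        if accepted:
            total += p
    return total

# Example: three members who each accept a good project with probability 0.7.
for rule in ("hierarchy", "polyarchy", "committee"):
    print(rule, round(acceptance_probability([0.7, 0.7, 0.7], rule), 3))
# -> hierarchy 0.343, polyarchy 0.973, committee 0.784
```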
Summary:
Knowledge of how humans move can help intelligent robots in tasks involving an interactive human environment, such as navigating through a crowded street, or playing sports or tabletop games with humans.
This paper, inspired by ideas in cognitive science, proposed a new and general approach to human motion understanding via pattern completion on a learned latent representation space. Our model outperforms the current state of the art in human motion prediction across a number of tasks, with no customization. The architecture uses a new and generic autoencoder based on sequence-to-sequence learning. See here for a further introduction to this work by the first author, Yi Tian Xu.
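For readers who want a concrete picture of a sequence-to-sequence autoencoder (a generic PyTorch sketch, not the paper's actual architecture; the dimensions are made up):

```python
import torch
import torch.nn as nn

class Seq2SeqAutoencoder(nn.Module):
    """Minimal sequence-to-sequence autoencoder sketch: a GRU encoder
    compresses a pose sequence into a latent vector, and a GRU decoder
    reconstructs the sequence from that latent code."""

    def __init__(self, pose_dim, latent_dim=128):
        super().__init__()
        self.encoder = nn.GRU(pose_dim, latent_dim, batch_first=True)
        self.decoder = nn.GRU(pose_dim, latent_dim, batch_first=True)
        self.readout = nn.Linear(latent_dim, pose_dim)

    def forward(self, x):
        # x: (batch, time, pose_dim)
        _, h = self.encoder(x)          # h: (1, batch, latent_dim), the latent code
        out, _ = self.decoder(x, h)     # teacher-forced reconstruction
        return self.readout(out)

# Toy usage: reconstruct random "motion" sequences of 25 frames, 51-dim poses.
model = Seq2SeqAutoencoder(pose_dim=51)
batch = torch.randn(8, 25, 51)
recon = model(batch)
loss = nn.functional.mse_loss(recon, batch)
loss.backward()
print(recon.shape, float(loss))
```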