20.2.2019 16:00 @ Applied Mathematical Logic
The uniform interpolation property of the intuitionistic propositional calculus (IPC) was first proved by Pitts in 1992 by means of proof-theoretic methods. We prove an open mapping theorem for the topological spaces dual to finitely presented Heyting algebras. In turn, this yields a short, self-contained semantic proof of Pitts' result. Our proof is based on the methods of Ghilardi & Zawadowski; however, it requires neither sheaves nor games, only basic duality theory for Heyting algebras. This is joint work with Sam van Gool.
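For reference, the uniform interpolation property mentioned above can be stated as follows (this is the standard formulation of Pitts' theorem, not notation from the talk itself): for every formula $\varphi$ and propositional variable $p$ there are formulas $\exists p\,\varphi$ and $\forall p\,\varphi$, built from the variables of $\varphi$ other than $p$, such that for every formula $\psi$ not containing $p$:

```latex
\vdash_{\mathrm{IPC}} \varphi \to \psi
  \iff \vdash_{\mathrm{IPC}} (\exists p\,\varphi) \to \psi,
\qquad
\vdash_{\mathrm{IPC}} \psi \to \varphi
  \iff \vdash_{\mathrm{IPC}} \psi \to (\forall p\,\varphi).
```

That is, $\exists p\,\varphi$ is the strongest $p$-free consequence of $\varphi$, and $\forall p\,\varphi$ the weakest $p$-free antecedent; the interpolants depend only on $\varphi$ and $p$, uniformly in $\psi$.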
27.2.2019 16:00 @ Applied Mathematical Logic
Recall that in natural deduction each primitive constant is equipped with introduction and elimination rules. Such rules can be given not only for the logical connectives and the quantifiers, but also for the identity predicate: its introduction rule is the reflexivity axiom, t = t, and its elimination rule is the indiscernibility of identicals. Although a normalization theorem can be proved for the resulting system, one might not be entirely satisfied with this treatment of identity, especially if one adheres to the idea, going back to Gentzen, that the meaning of a primitive constant is determined by its introduction rule(s). Firstly, it is not obvious that the introduction rule for the identity predicate is strong enough to justify its elimination rule. Secondly, it is not clear what to say about definitions taking the form of equations. Such definitions are usually regarded as axioms, hence they must be additional introduction rules for the identity predicate. Since definitions are particular to theories, it follows that the meaning of the identity predicate changes from one theory to another. I will show that by enriching natural deduction with a theory of definitional identity we can answer both of these worries: we can justify the elimination rule on the basis of the introduction rule, and we can extend any theory with definitions while keeping the reflexivity axiom as the only introduction rule for the identity predicate.
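The two identity rules described above can be written schematically as follows (a standard rendering; the rule labels are ours, not the speaker's):

```latex
\frac{}{\,t = t\,}\;(=\!I)
\qquad\qquad
\frac{t = u \quad A(t)}{A(u)}\;(=\!E)
```

Here $(=\!I)$ is the reflexivity axiom, and $(=\!E)$ is the indiscernibility of identicals: from $t = u$ and any formula $A$ holding of $t$, infer that $A$ holds of $u$.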
5.3.2019 13:15 @ Computational Methods
In many contemporary optimization problems, such as parameter training for deep learning architectures, it is computationally challenging or even infeasible to evaluate an entire function or its derivatives. This necessitates the use of stochastic algorithms that sample problem data, which can jeopardize the guarantees classically obtained through globalization techniques via a trust region or a line search. Using subsampled function values is particularly challenging for the latter strategy, which relies on multiple evaluations. In addition, there has been increasing interest in nonconvex formulations of data-related problems. For such instances, one aims to develop methods that converge at a reasonable rate to second-order stationary points, which is particularly delicate to ensure when one only accesses subsampled approximations of the objective and its derivatives. In this talk, I present a stochastic algorithm based on negative curvature and Newton-type directions, computed for a subsampling model of the objective. A line-search technique is used to enforce suitable decrease for this model, and for a sufficiently large sample, a similar amount of reduction holds for the true objective. By using probabilistic reasoning, we can then obtain worst-case complexity guarantees for our framework. Furthermore, I demonstrate the practical performance of the algorithm on some nonconvex optimization problems arising from training CNNs.
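To illustrate the general idea of combining Newton-type and negative-curvature directions with a line search on a subsampled model, here is a minimal sketch. All names (`f_val`, `f_grad`, `f_hess`, the toy quadratic objective) are our own illustrative assumptions, and the sketch omits the probabilistic accuracy conditions and complexity analysis that the talk is actually about:

```python
import numpy as np

def subsampled_newton_step(f_val, f_grad, f_hess, x, sample,
                           alpha0=1.0, c1=1e-4, tau=0.5, max_backtracks=30):
    """One step: pick a Newton or negative-curvature direction for the
    model built from `sample`, then do Armijo backtracking on that model."""
    g = f_grad(x, sample)
    H = f_hess(x, sample)
    lam, V = np.linalg.eigh(H)          # eigenvalues in ascending order
    if lam[0] > 1e-8:
        d = -np.linalg.solve(H, g)      # Newton-type direction (H pos. def.)
    else:
        v = V[:, 0]                     # eigenvector of smallest eigenvalue
        d = -v if g @ v > 0 else v      # negative-curvature descent direction
    # backtracking (Armijo) line search on the subsampled model
    f0 = f_val(x, sample)
    alpha = alpha0
    for _ in range(max_backtracks):
        if f_val(x + alpha * d, sample) <= f0 + c1 * alpha * (g @ d):
            break
        alpha *= tau
    return x + alpha * d

# toy nonconvex finite-sum objective: f_i(x) = 0.5 x^T A x + a_i^T x
rng = np.random.default_rng(0)
A = np.diag([1.0, -1.0])                # indefinite Hessian -> nonconvex
data = rng.normal(size=(100, 2))        # rows a_i

def f_val(x, s):  return 0.5 * x @ A @ x + data[s].mean(axis=0) @ x
def f_grad(x, s): return A @ x + data[s].mean(axis=0)
def f_hess(x, s): return A              # constant Hessian in this toy problem

sample = np.arange(50)                  # a fixed subsample of the data
x0 = np.array([1.0, 0.5])
x1 = subsampled_newton_step(f_val, f_grad, f_hess, x0, sample)
```

Since the sampled Hessian here is indefinite, the step follows a negative-curvature direction, and the Armijo test guarantees decrease of the subsampled model; the point of the talk is that, for a sufficiently large sample, a comparable decrease transfers to the true objective.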