Upcoming seminars

  • Viresh Patel (University of Amsterdam): Decomposing tournaments into paths

    22.1.2018 10:00 @ Applied Mathematical Logic

    In this talk we consider a generalisation of Kelly's conjecture, due to Alspach, Mason, and Pullman from 1976. Kelly's conjecture states that every regular tournament has an edge decomposition into Hamilton cycles; this was proved by Kühn and Osthus for all sufficiently large tournaments. The conjecture of Alspach, Mason, and Pullman concerns general tournaments and asks for the minimum number of paths needed in an edge decomposition of each tournament into paths. There is a natural lower bound for this number in terms of the degree sequence of the tournament, and they conjecture that this bound is tight for tournaments of even order. Almost all cases of the conjecture are open, and we prove many of them. This is joint work with Allan Lo, Jozef Skokan, and John Talbot.
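
    As a small illustration (ours, not part of the talk), the degree-based lower bound mentioned in the abstract can be computed directly: every vertex whose out-degree exceeds its in-degree must start at least that many paths more than end there. A minimal Python sketch, with hypothetical helper names:

```python
import itertools

def excess_lower_bound(n, edges):
    """Degree-excess lower bound on the number of paths needed to
    decompose the edge set of a tournament on vertices 0..n-1.
    `edges` is a set of directed pairs (u, v)."""
    out_deg = [0] * n
    in_deg = [0] * n
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
    # A vertex with out-degree exceeding in-degree must be the start
    # of at least (out - in) paths in any path decomposition.
    return sum(max(out_deg[v] - in_deg[v], 0) for v in range(n))

# The transitive tournament on 4 vertices: edge (u, v) whenever u < v.
edges = {(u, v) for u, v in itertools.combinations(range(4), 2)}
print(excess_lower_bound(4, edges))  # → 4
```

Here the bound of 4 is attained, e.g. by the paths 0→1→2→3, 0→2, 0→3, 1→3.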

  • David Coufal (ICS CAS): Convolution kernel networks for function approximation

    22.1.2018 14:00 @ Hora Informaticae

    In the area of radial basis function neural networks (RBF networks), various bump functions, such as Gaussians or inverse multiquadrics, are used to represent the network's computational units. The universal approximation property (UAP) for RBF networks guarantees the existence of a network (a setting of its parameters) that approximates arbitrarily well any function from a given class, e.g., the class of continuous functions on a compact domain. This well-known result (Park, Sandberg – Neural Comp. 1991) is based on the assumption that the widths of the computational units can be shrunk arbitrarily. If so, then almost all reasonable bump functions can be used to make an RBF network exhibit the UAP. However, if the unit widths are fixed, as is the case for convolution kernel neural networks, the situation becomes more complex. It turns out that if one wants to preserve the UAP, it is necessary to examine the behavior of the multi-dimensional Fourier transform of the computational units; and there are bump functions that are no longer suitable for building such a network. In the lecture, we present in more detail the theorem that leads to the employment of the Fourier transform, together with concrete examples of its computation for commonly used bump functions.
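
    For intuition on the classical setting (our illustration, not the speaker's material), an RBF network with freely chosen widths and weights can fit a smooth target closely; the helper names below are our own. A minimal sketch, fitting the output weights by least squares:

```python
import numpy as np

def fit_rbf(xs, ys, centers, width):
    """Least-squares fit of the output weights of a Gaussian RBF
    network N(x) = sum_j w_j * exp(-((x - c_j) / width)**2)."""
    Phi = np.exp(-((xs[:, None] - centers[None, :]) / width) ** 2)
    w, *_ = np.linalg.lstsq(Phi, ys, rcond=None)
    return w

def eval_rbf(xs, w, centers, width):
    Phi = np.exp(-((xs[:, None] - centers[None, :]) / width) ** 2)
    return Phi @ w

# Approximate sin on [0, 2*pi] with 20 Gaussian units of width 0.5.
centers = np.linspace(0, 2 * np.pi, 20)
xs = np.linspace(0, 2 * np.pi, 200)
ys = np.sin(xs)
w = fit_rbf(xs, ys, centers, width=0.5)
err = np.max(np.abs(eval_rbf(xs, w, centers, width=0.5) - ys))
print(err)  # small compared to the signal's amplitude
```

Note this only exercises the easy regime of the 1991 result, where the width is a free parameter; the lecture's subject is precisely what changes when the width is fixed.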