# 2007-08 Colloquia talks


Tuesday, April 22, 2008 | Iain Moffatt, University of Waterloo, Canada | Knot Theory and Ribbon Graphs
Abstract: In this talk I will give an overview of my work on the interaction between knot and graph polynomials. Starting with Kauffman's well-known state model for the Jones polynomial, I will briefly describe ribbon graphs and their polynomials, and show how these arise naturally in knot theory. I will go on to describe the connections between some well-known knot polynomials and graph polynomials, which allow the study of graph polynomials using knot theory and vice versa. I will describe several of my contributions to this area, addressing topics such as determination and equivalence, categorification, and, more generally, applications of knot theory to graph theory (and the other way round) to show how the interaction between knots and graphs can be mutually beneficial and productive. |

Thursday, April 17, 2008 | Thomas Michael Keller, Texas State University | Derived Length, Character Degrees, and Conjugacy Class Sizes
Abstract: It has long been known that the derived length of a finite solvable group G is bounded above by some function depending only on the number |cd(G)| of irreducible complex character degrees of G. It is conjectured that the best possible such function is logarithmic. We will present the currently known results on this conjecture and also consider the related question for conjugacy class sizes in place of character degrees. As time permits, we will also discuss the special case of normally monomial p-groups of maximal class. |

Tuesday, April 15, 2008 | Maria Audi Byrne, Mitchell Cancer Institute, University of South Alabama | Mathematical Models of Pattern Formation in Biology
Abstract: The role of mathematics in biological pattern formation is to identify unifying themes among patterns and describe them mathematically. Biological phenomena provide a trove of spatiotemporal patterns that involve common themes across multiple species, spatial scales and contexts. Self-organized alignment, a pervasive theme that applies to wound healing, embryonic development and bird flocking, admits several mathematical descriptions. In this talk, I will present several models of alignment, including a mean-field PDE and a stochastic discrete lattice gas. |

Friday, April 11, 2008 | David Benko, Western Kentucky University | Uniform Approximation by Weighted and Homogeneous Polynomials
Abstract: Approximation by weighted polynomials with varying weights was introduced by Saff. Kroo raised the question whether we can uniformly approximate a continuous even function on a convex origin-symmetric curve by homogeneous polynomials. In this talk we will analyze the relationship between these two approximation problems. |

Thursday, April 10, 2008 | Lillian Yau, Tulane University | A Flowgraph Competing Risks Model with an Application on Kidney Transplants
Abstract: Flowgraph models are versatile data analytical tools for multi-state time-to-event data that form a semi-Markov process. Most recent research and applications are within the Bayesian framework. In this talk, we model two competing risks using a flowgraph model, and provide the lower bound of standard errors for maximum likelihood estimates of a covariate. Score tests are performed for the significance of the parameters. The method is illustrated using kidney transplant data collected at the Tulane Abdominal Transplant Institute of Tulane Medical Center in New Orleans, Louisiana. |

Thursday, April 3, 2008 | John Stufken, University of Georgia | Maximin Universally Optimal Block Designs in the Presence of a Trend
Abstract: We consider experiments for comparing treatments using units that are ordered linearly over time or space within blocks. In addition to the block effect, we assume that a trend effect influences the response. The latter is modeled as a smooth component plus a random term that captures departures from the smooth trend. The model is flexible enough to cover a variety of different situations, for instance, most of the effects may be either random or fixed. The information matrix for a design will be a function of several variance parameters. While data will shed light on the values of these parameters, at the design stage they are unlikely to be known, so we suggest a maximin approach in which a minimal information matrix is maximized. We derive maximin universally optimal designs and study their robustness. These designs are based on semibalanced arrays. Special cases capture results available in the literature. |

Tuesday, March 25, 2008 | Issam Louhichi, University of Toledo | Commutativity of Toeplitz Operators on the Bergman Space
Abstract: This talk will report on the work I have done with L. Zakariasy and my current work with N. V. Rao. The subject is the product of Toeplitz operators on the Bergman space of the unit disk, commutativity between them, and roots and powers of individual operators. One of the most interesting results we have obtained is as follows: if two Toeplitz operators commute with a quasihomogeneous Toeplitz operator, then they commute with each other. |

Friday, March 21, 2008 | Sergiy Borodachov, Georgia Institute of Technology | Asymptotic Behavior of the Minimal Riesz Energy on Rectifiable Sets
Abstract: Given a compact rectifiable set A embedded in Euclidean space, we investigate minimal Riesz energy points on A, i.e. N points constrained to A and interacting via the power law potential |x-y|^-s, where s > 0 is a fixed parameter. Our main results concern the behavior of the leading term of the minimal energy as N gets large as well as the limiting distribution of the corresponding ground state configurations when s is greater than the Hausdorff dimension of A. We also consider the case of weighted power law potential. Our results provide a method for generating N-point configurations on A that are "well-separated" and have a given non-uniform asymptotic distribution. This is joint work with Ed Saff and Doug Hardin from Vanderbilt University. |
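For readers unfamiliar with the area, the leading-term behavior mentioned in the abstract can be sketched roughly as follows; this is a hedged paraphrase of results in the Hardin-Saff school (unweighted case, A a d-rectifiable set, s > d), not a quotation from the speaker's abstract:

```latex
% Minimal N-point Riesz s-energy of A:
E_s(A, N) := \min_{\{x_1,\dots,x_N\} \subset A} \ \sum_{i \neq j} \frac{1}{|x_i - x_j|^{s}} .
% For s greater than the Hausdorff dimension d of A,
% the leading term grows like N^{1+s/d}:
\lim_{N \to \infty} \frac{E_s(A, N)}{N^{1+s/d}} \;=\; \frac{C_{s,d}}{\mathcal{H}_d(A)^{s/d}} ,
```

where C_{s,d} is a positive constant independent of A and H_d denotes d-dimensional Hausdorff measure; in this regime the ground-state configurations distribute asymptotically uniformly with respect to H_d, which is the limiting-distribution statement the abstract refers to.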

Thursday, March 20, 2008 | John P. Morgan, Virginia Tech | Designing an Experiment in Two Blocks
Abstract: Faced with cost, time, or other pressures to keep an experiment small, blocking can be an effective tool for increasing precision of treatment comparisons. The simplest implementation of blocking is a division of experimental units into two equi-sized subsets, allocating one degree of freedom to explain unit heterogeneity. Small experiments will have block size k smaller than the number of treatments v being compared. This talk attacks the problem of optimal allocation of treatments to two small, equi-sized blocks. Solutions depend on the optimality criterion employed as well as the ratio k/v. This work evolved in response to a design request from an engineer working with vehicle traction on sandy surfaces. The talk will begin with a discussion of that experiment and the evolution of its design. We will touch on practical issues such as unrealistic expectations, the destructive effects of sloppy experimental procedure, and the considerable gap between textbook design and the engineer's needs. |

Thursday, March 6, 2008 | Xiangrong Yin, University of Georgia | Sparse Dimension Reduction: Sparse MAVE and Its Extension
Abstract: In this talk, focusing on the sufficient dimensions in the regression mean function, we combine the ideas of sufficient dimension reduction and variable selection to propose a shrinkage estimation method, sparse MAVE. The sparse MAVE can exhaustively estimate dimensions in the mean function, while selecting informative covariates simultaneously, without assuming any particular model or particular distribution on the predictor variables. Furthermore, we propose a modified BIC criterion to effectively estimate the dimension of the mean function. Extensions of sparse MAVE to two types of predictors are also discussed. The efficacy of sparse MAVE is verified through simulation studies and via analyses of real data sets. |

Thursday, February 28, 2008 | Jaromy Kuhl, University of West Florida | Avoiding Partial Latin Squares
Abstract: Let P be an n x n array of symbols. P is called avoidable if for every set of n symbols, there is an n x n Latin square L on these symbols such that corresponding cells in L and P differ. We present an argument showing that all partial Latin squares of odd order at least nine are avoidable. This completes the proof of a conjecture stating that all partial Latin squares of order at least four are avoidable. We then ask the following question: given an n x n partial Latin square P with some specified structure, is there an n x n Latin square L of the same structure that avoids P? We answer this question in the context of generalized sudoku squares. |

Thursday, February 14, 2008 | Bin Wang, University of South Alabama | Bootstrapping High Percentiles with Generalized Bootstrap
Abstract: In this talk, we briefly discuss the traditional parametric bootstrap (PB) and the traditional non-parametric bootstrap (NPB), and introduce a generalized bootstrap (GB). GB is a general form of parametric bootstrap which embeds the generalized lambda distribution fitting technique in the bootstrap procedure. It has advantages over PB and NPB in that GB needs fewer assumptions about the functional form of the underlying distribution than PB, and it is more efficient than NPB. Simulation results show that GB works well for very small samples, even when the parameters to be estimated violate smoothness requirements. |

Thursday, February 7, 2008 | Xin-Min Zhang, University of South Alabama | Fractal Dimensions of Sierpinski Pedal Triangles
Abstract: Sierpinski triangles (ST) are well-known fractal sets. These interesting figures have been encountered in many seemingly different and independent branches of pure and applied mathematics. In this talk, we shall explain how to generalize Sierpinski triangles to a 2-parameter family of self-similar fractals, the so-called Sierpinski pedal triangles (SPT). The construction of an SPT uses the pedal triangle of the generating triangle (while the construction of an ST uses the midpoint triangle), and its fractal dimension depends on the shape of the generating triangle. When the generating triangle is equilateral, the resulting SPT is the same as the resulting ST. The fractal dimension function D(x,y) of these SPTs is an implicitly defined symmetric function of two real variables over a symmetric domain. It attains its global minimum at (pi/3, pi/3). That is, among all SPTs generated by acute triangles, the ordinary Sierpinski triangle has the least fractal dimension, ln 3/ln 2. As a by-product of the ideas used in the proof of the above assertion, we provide two useful results on the optimization of implicitly defined symmetric functions of n variables. This talk will be presented at a very elementary level (first year calculus is all you need to follow most of the talk), and everyone is encouraged to attend. |
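For context, the value ln 3/ln 2 quoted in the abstract comes from the standard self-similarity dimension computation for the ordinary Sierpinski triangle; the following sketch is provided for orientation and is not part of the speaker's abstract:

```latex
% The ordinary Sierpinski triangle is the attractor of three
% contractions, each with ratio 1/2, satisfying the open set
% condition. Its similarity dimension d solves 3 (1/2)^d = 1:
3 \cdot \left(\tfrac{1}{2}\right)^{d} = 1
\quad\Longrightarrow\quad
2^{d} = 3
\quad\Longrightarrow\quad
d = \frac{\ln 3}{\ln 2} \approx 1.585 .
```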

Thursday, January 24, 2008 | Boris Kalinin, University of South Alabama | Measure Rigidity for Commuting Differentiable Maps
Abstract: We will consider examples of algebraic and non-algebraic differentiable maps of compact manifolds whose iterates exhibit some hyperbolic behavior. We will discuss some problems related to the study of measures invariant simultaneously under several commuting maps of this type. |

Thursday, November 29, 2007 | Jiayang Sun, Case Western Reserve University | Tube Method and its Applications
Abstract: In this talk, we first review the tube method, showing how it connects to probability, statistics, and a bit of differential geometry, and present its applications to large deviations and a wide array of statistical problems. We then show our new general simultaneous confidence regions (SCR) for the mean response surface from a spatially-correlated Negative Binomial (NB) regression model. The motivation for the SCR for an NB model came from analyzing neuronal imaging data, which are often overdispersed, heterogeneous, and spatially correlated. Some numerical studies and real data applications will also be presented. |

Thursday, November 8, 2007 | John Armstrong, Tulane University | Knot Theory: A Tangled Comedy of Combinatorics and Topology
Abstract: Since its inspiration in physical problems of the 19th century, the story of knots has interwoven techniques from topology and from combinatorics. Here, we will survey this history, highlighting the concepts from either side as the subject has developed. Our aim is to show how the new field of quantum topology picks up both threads of the story, and to speculate how it might reach a happy ending by splicing them both together. |

Thursday, October 25, 2007 | Ameina Summerlin, University of South Alabama | Power, Extension and Multiple Comparison Adjustments for the Lin and Wang Test for Overall Homogeneity of Weibull Survival Curves
Abstract: In many clinical trials, two treatments are compared to determine if one has a superior effect on survival. Analyzing this type of data can take the form of two survival curves that need to be compared. At least some of the observations are usually right censored, and the censoring is assumed to be independent of survival time. There are many methods for comparing two survival curves in the presence of independent right censoring. Two of the most frequently used are the logrank and Wilcoxon tests. However, when hazard rates cross, these methods are known to have little power. Lin and Wang (2004) recently developed a new test for the overall homogeneity of survival curves. In this talk, the results of a detailed power analysis of the Lin and Wang test will be discussed. When more than two treatments are being compared, the data can take the form of several survival curves that need to be compared. The generalization of the Lin and Wang test to test for overall homogeneity of K (K>2) survival curves will be presented. If the overall test for K survival curves determines a difference exists, the next step is to perform multiple comparisons to determine which treatments actually differ. Multiple testing procedures based on the Lin and Wang procedure will be proposed. The power and familywise error rate of these adjustment procedures will be presented. |

Thursday, October 4, 2007 | Xin-Min Zhang, University of South Alabama | Napoleon's Theorem - Mathematics, History, the Emperor's Family, and Mathematicians
Abstract: Napoleon's Theorem is a well-known result in classical Euclidean geometry. It has been rediscovered and reproved many times during the last century. Numerous geometric configurations associated with this theorem have revealed some unexpected aspects of the intrinsic nature of triangle geometry. Nowadays, mathematicians are still fascinated by its simplicity and elegance, and find many analogs of it in different geometric settings. Some Napoleon-like or Napoleon-type theorems have also been established. However, despite having been widely studied, it is still not clear why this theorem is attributed to Napoleon. Also, to which Napoleon should it be attributed? These questions have been debated for perhaps as long as the history of the theorem itself. In this talk, we will review Napoleon's Theorem and its related geometric configurations. We will also take a closer look at the Emperor's family to see who else may be related to this theorem. Finally, we will note some mathematicians who have either researched the origins of Napoleon's Theorem or studied its generalizations. |

Thursday, September 27, 2007 | Maria Audi Byrne, Vanderbilt University & Mitchell Cancer Institute, University of South Alabama | Modeling Approaches to Biological Problems
Abstract: Modeling in biology requires the development of new techniques which reflect the uniqueness of biological problems. Discrete particle system models (cellular automata models) are especially well-suited for biological problems in which individual stochastic cellular or molecular interactions play an important role. While these models are flexible and straightforward to implement, they may be computationally intensive and are difficult to interpret analytically, so large-scale events are often best described by continuum PDE models. Since biological problems are often characterized by a range of length scales, hybrid models that incorporate continuum and discrete elements can be very powerful. In this talk, the discrete LGCA and Potts models will be described, with examples of these models applied to biological problems, typically with hybrid components. |

Thursday, September 20, 2007 | Srinivas Palanki, Department of Chemical Engineering, University of South Alabama | Robust Control of Multivariable Nonlinear Systems
Abstract: In this research, tools from differential geometry are utilized to develop a robust nonlinear controller methodology for multivariable nonlinear systems that are subject to parametric uncertainty. A nonlinear state feedback is synthesized that approximately linearizes the nonlinear system in an input/output sense by solving a convex optimization problem online. A robust controller is designed for the resulting linear uncertain system using a multi-model H_2/H_infinity synthesis approach to ensure robust stability and performance. This multi-loop controller design methodology is illustrated via simulation of a chemical reactor control problem. |

Thursday, September 13, 2007 | Bin Wang, University of South Alabama | Weighted Kernel Density Estimation under Right Censoring
Abstract: In survival data analysis, data are usually incomplete due to censoring. Patients may drop out of a study for different reasons, and the censored survival times are seldom censored randomly. In this talk, we propose to estimate the density based on right-censored data (informative or non-informative censoring) by a generalized kernel density estimator -- the weighted kernel density estimator (wKDE). The optimal bandwidth selection problem will be discussed, and the performance of the wKDE with different bandwidth selection criteria will be illustrated via simulation studies. |

Thursday, September 6, 2007 | Karl Heinrich Hofmann, Tulane University & Darmstadt University of Technology, Germany | Why we are interested in the Structure of Connected Pro-Lie Groups and what we know about them
Abstract: A topological group is a pro-Lie group if it is isomorphic to a closed subgroup of an arbitrary product of finite dimensional real Lie groups. The category of these groups relates to Lie groups as profinite groups relate to finite groups. It is studied because it contains the class of (almost) connected locally compact groups and is a complete category (in contrast to the category of locally compact groups). Every pro-Lie group G has a Lie algebra g which is isomorphic to a closed subalgebra of a product of finite dimensional Lie algebras, and there is an exponential function exp from g to G. This is the basis of a Lie theory and a fairly detailed structure theory for pro-Lie groups. Certain otherwise familiar tools present problems, for instance, the passage to quotient groups, the lifting of one-parameter subgroups, and the Open Mapping Theorem. |