2004-05 Colloquium Talks

 

Date | Speaker | Talk
Thursday, April 21, 2005 | Madhuri Mulekar, University of South Alabama | Sequential Sampling Plans to Estimate Mean Number of Individuals per Sampling Unit

Abstract: In agriculture, efforts are made to maximize the net revenue. Pest populations, such as insects and weeds, reduce yield, but control of these pests can be expensive. In general, control is only recommended when the pest population exceeds the economic threshold, the point at which the anticipated loss in revenue from the pests exceeds the cost of control. One element in the study of population dynamics is precise estimation of the density of pest and beneficial species. The most commonly used measure of precision in sequential estimation of the mean is the coefficient of variation D of the sample mean. In this seminar, methods commonly used within agriculture to sequentially estimate the population mean with a specified coefficient of variation D for the binomial, negative binomial, and Poisson distributions will be reviewed, and some new approaches will be considered. To illustrate some of the methods, data collected from a 2 hectare field near Chickasha, Oklahoma on the number of fleahoppers (Pseudatomoscelis seriatus (Reuter)) on cotton will be used. Although the methods will be discussed in the context of adult cotton fleahoppers, they would apply in an analogous manner to other pest species.
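To make the stopping criterion concrete, here is a minimal sketch (an illustration written for this page, not material from the talk) of a generic sequential plan that keeps adding sampling units until the estimated coefficient of variation of the sample mean falls below a target D. It uses the ordinary sample variance rather than the distribution-specific variance functions (binomial, negative binomial, Poisson) discussed in the talk, and the Poisson counts and target value are illustrative assumptions.

    import numpy as np

    def sequential_mean_estimate(draw_count, D=0.1, min_n=5, max_n=10000):
        """Keep sampling units until the estimated coefficient of variation of
        the sample mean, sqrt(s^2 / n) / xbar, falls below the target D."""
        counts = []
        while len(counts) < max_n:
            counts.append(draw_count())
            n = len(counts)
            if n >= min_n:
                xbar = np.mean(counts)
                se = np.sqrt(np.var(counts, ddof=1) / n)
                if xbar > 0 and se / xbar <= D:
                    break
        return np.mean(counts), len(counts)

    # Illustrative use: Poisson-distributed fleahopper counts with mean 3 per unit.
    rng = np.random.default_rng(1)
    mean_hat, n_used = sequential_mean_estimate(lambda: rng.poisson(3.0), D=0.1)
    print(f"estimated mean {mean_hat:.2f} after {n_used} sampling units")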

Monday, April 18, 2005 | Sergey Belyi, Troy State University | The Krein Formula Revisited

Abstract: We will discuss some of the recent results related to the Krein resolvent formula, which describes the resolvent difference of two self-adjoint extensions of a closed symmetric linear operator. The cases we consider include the case when the symmetric operator is not densely defined and the case of a space with an indefinite metric. We show that the coefficients in Krein's formula can be expressed in terms of analogues of the von Neumann parameterizations. The properties of Weyl-Titchmarsh functions corresponding to pi-self-adjoint extensions of a pi-symmetric operator are studied. In particular, it is shown that the Weyl-Titchmarsh functions corresponding to the above-mentioned pi-self-adjoint extensions are connected via linear-fractional transformations whose coefficients are expressed in terms of von Neumann's parameters. All the concepts are illustrated by examples.
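For orientation only: in the simplest classical case of a densely defined symmetric operator A with deficiency indices (1,1), Krein's formula relates the resolvents of the self-adjoint extensions roughly as follows (sign and parametrization conventions vary between sources, so this is a schematic reminder rather than the speaker's formulation):

    \[
    (A_t - z)^{-1} \;=\; (A_0 - z)^{-1} \;-\; \frac{\langle\,\cdot\,,\varphi_{\bar z}\rangle}{Q(z) + t}\,\varphi_z ,
    \]

where A_0 is a fixed self-adjoint extension, \varphi_z spans the deficiency subspace \ker(A^* - z), Q is the associated Q-function, and t in \mathbb{R} \cup \{\infty\} labels the extensions.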

Thursday, April 14, 2005 | Brian Boe, University of Georgia | Nilpotent Matrices in Lie Algebras

Abstract: It's easy to describe the set of n x n complex matrices whose r-th power is zero: each is conjugate (via an invertible matrix) to a Jordan canonical form matrix with all eigenvalues 0 and blocks of size at most r. In this talk I will discuss work done by the University of Georgia VIGRE Algebra Group on generalizations of this problem. We will consider an arbitrary simple algebraic group G over an algebraically closed field, along with certain embeddings of its Lie algebra g into n x n matrices, and describe the set of elements of g whose r-th power is zero, in terms of G conjugacy classes.

As a corollary, when the characteristic is p, we obtain a description of the "restricted nullcone" of g (the case r=p), which is important in the study of Lie algebra cohomology. When p is a "good prime" (not too small), this verifies, by much more elementary methods, a 2003 result of Carlson, Lin, Nakano, and Parshall. And when p is "bad" (very small), our results are new.

Most of the talk should be accessible to first year graduate students.
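As a concrete instance of the Jordan-form description in the first paragraph of the abstract (my example, not the speaker's): for n = 3 and r = 2, every 3 x 3 complex matrix whose square is zero is conjugate to one of

    \[
    \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},
    \qquad
    \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},
    \]

that is, a Jordan matrix with all eigenvalues 0 whose blocks have size at most 2.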

Thursday, April 7, 2005 | Xiao-Song Lin, University of California at Riverside | An Unfolding Problem of Polygonal Arcs in 3-Space

Abstract: Motivated by the protein folding problem in molecular biology, we will propose a mathematical problem about the unfolding of a certain kind of polygonal arc in 3-space. We will also discuss the possibility of extending the method developed in the recent solution of the classical carpenter's ruler problem in the plane to this unfolding problem.

Thursday, March 31, 2005 | Albert Chau, Harvard University | The Ricci Flow on Non-Compact Kaehler Manifolds

Abstract: Since its introduction by Richard Hamilton in 1982, the Ricci flow has been successfully applied to fundamental problems in topology, Riemannian and Kaehler geometry. In this talk I will discuss some recent general directions in the Ricci flow with an emphasis on applications to complex differential geometry and uniformization theorems on non-compact Kaehler manifolds.
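For reference (a standard fact, not specific to the talk), the Ricci flow evolves a Riemannian metric g(t) by

    \[
    \frac{\partial}{\partial t}\, g(t) \;=\; -2\,\operatorname{Ric}\bigl(g(t)\bigr),
    \]

and when the initial metric is Kaehler the flow remains Kaehler, which is why it interacts so well with complex differential geometry and the uniformization questions mentioned above.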

Thursday, March 24, 2005 | Sudesh Srivastav, Tulane University | Optimality and Construction of Certain Classes of Block Designs

Abstract: In this talk we consider the problem of determining and constructing E-optimal block designs within an experimental setting where v treatments are arranged in b blocks of size k less than v such that bk = vr + 1 and r(k-1) = lambda(v-1) + 1. Sufficient conditions for a design to be E-optimal within these classes are derived. An infinite series of such E-optimal designs, in the form of generalized group divisible designs with s groups (GGDD(s)), is also constructed for k = 3 and lambda = 1. We also define the notion of resolvable balanced incomplete block designs in which m resolvable classes are identical. These designs, denoted RBIBD(m), are introduced as resolvable balanced incomplete block designs with m identical resolvable classes. The structural properties of these designs are discussed, and several results are derived for RBIBD(m)s that generalize many previously known results for the class of resolvable balanced incomplete block designs.
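As a quick arithmetic illustration of the two parameter relations (a hypothetical parameter set chosen for this page, not one from the talk): taking v = 4, b = 3, k = 3, r = 2, and lambda = 1 gives

    \[
    bk = 3 \cdot 3 = 9 = 4 \cdot 2 + 1 = vr + 1,
    \qquad
    r(k-1) = 2 \cdot 2 = 4 = 1 \cdot 3 + 1 = \lambda(v-1) + 1 .
    \]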

Thursday, March 10, 2005 | Tien-Yien Li, Michigan State University | Solving Polynomial Systems

Abstract: Solving polynomial systems is a problem frequently encountered in many areas of mathematics, physical sciences, and engineering. For example, Lorenz's system of differential equations in chaos theory has a polynomial system on the right hand side. In this talk we will show how algebraic geometry, homotopy methods (including polyhedral homotopies), and linear programming can be used to find numerically all the isolated roots of a polynomial system.
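As a toy illustration of the homotopy idea (a minimal sketch with a hand-picked 2 x 2 system, not the polyhedral machinery of the talk): one deforms an easy start system g into the target system f along H(x,t) = (1-t)*gamma*g(x) + t*f(x) and tracks each start root to t = 1 with Newton corrections.

    import numpy as np

    # Target system f(x) = 0:  x0^2 + x1^2 - 1 = 0,  x0 - x1 = 0.
    def f(x):  return np.array([x[0]**2 + x[1]**2 - 1, x[0] - x[1]])
    def Jf(x): return np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])

    # Start system g(x) = 0 with the same degrees (2 and 1) and known roots (+-1, 1).
    def g(x):  return np.array([x[0]**2 - 1, x[1] - 1])
    def Jg(x): return np.array([[2*x[0], 0.0], [0.0, 1.0]])

    gamma = np.exp(2j * np.pi * 0.123)   # random complex constant ("gamma trick")

    def H(x, t):  return (1 - t) * gamma * g(x) + t * f(x)
    def JH(x, t): return (1 - t) * gamma * Jg(x) + t * Jf(x)

    def track(x0, steps=200, newton_iters=5):
        """Follow one solution path of H(x,t) = 0 from t = 0 to t = 1."""
        x = np.array(x0, dtype=complex)
        for k in range(1, steps + 1):
            t = k / steps
            for _ in range(newton_iters):          # Newton correction at each t
                x = x - np.linalg.solve(JH(x, t), H(x, t))
        return x

    for start in [(1.0, 1.0), (-1.0, 1.0)]:
        root = track(start)
        print(np.round(root, 6), "residual", np.linalg.norm(f(root)))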

Monday, March 7, 2005 | Yichuan Zhao, Georgia State University | Inference for the Mean Residual Life Function and its Application

Abstract: In addition to the distribution function, the mean residual life (MRL) function is another important function that can be used to characterize a lifetime in survival analysis and reliability. Some inference procedures for the MRL function have been proposed in the literature; however, their accuracy may be low when the sample size is small. An empirical likelihood (EL) inference procedure for the MRL function is proposed and the limiting distribution of the EL ratio for the MRL function is derived. Based on this result, we obtain confidence intervals for the MRL function.

The proportional mean residual life model of Oakes and Dasu (1990) is a regression tool for studying the association between the MRL function and its associated covariates. Although semiparametric inference procedures have been proposed in the literature, their accuracy may be low when the censoring proportion is relatively large. An EL-based semiparametric inference procedure is developed and an EL confidence region is constructed for the regression parameter. The proposed method is further compared with a normal-approximation-based method through a simulation study.
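For readers unfamiliar with the MRL function: m(t) = E[T - t | T > t], the expected remaining lifetime given survival to time t. The following minimal sketch computes its naive empirical version for complete (uncensored) data; handling censoring, which is the setting of the talk, requires product-limit machinery and is not attempted here.

    import numpy as np

    def empirical_mrl(times, t):
        """Naive empirical mean residual life at time t from uncensored data:
        the average of (T_i - t) over observations with T_i > t."""
        times = np.asarray(times, dtype=float)
        alive = times[times > t]
        return np.nan if alive.size == 0 else float(np.mean(alive - t))

    # Illustrative use with exponential lifetimes, for which m(t) is constant (= scale).
    rng = np.random.default_rng(0)
    sample = rng.exponential(scale=2.0, size=5000)
    print([round(empirical_mrl(sample, t), 2) for t in (0.0, 1.0, 2.0)])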

Thursday, March 3, 2005 | Craig Jensen, University of New Orleans | Moduli Spaces of Graphs

Abstract: One can form spaces whose points correspond to different graphs or to labeled graphs. Similar moduli spaces can also be constructed whose points correspond to trees or to graphs with cyclic orderings at each vertex (ribbon graphs). These spaces have interesting applications to the homology of free groups and many related groups. In this relatively non-technical talk, I will present several low-dimensional examples of these spaces, talk a bit about how they are related to each other, and make some comments about homology.

Thursday, February 24, 2005 | Ali Passian, Oak Ridge National Laboratory | Surface Modes and their Applications in the Sub-Micron Realm

Abstract: When the distance between two material domains is reduced, the two objects interact via a number of forces. Such forces depend, among other things, upon the local geometry of the two approaching points. This is the principle behind the Scanning Probe Microscope (SPM), where one of the objects (the probe) is a tiny sharp piece of material that moves very closely over the second object, much as the needle of a turntable moves over the grooves of a vinyl record. We have employed the spheroidal coordinate system to model an SPM problem, where the first object, the probe, is represented by the surface of a hyperboloid of revolution, and the second object is represented by a confocal hyperboloid. For the region of interest, the wave equation reduces to the Laplace equation, which is separable in the spheroidal system. The solutions involve the conical functions, which form the kernel of the Mehler-Fock integral transform. We obtain the solutions as an integral over the index of the conical functions and, using integral and series representations of the conical functions, computationally demonstrate several scenarios relevant to SPM experiments.
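For readers unfamiliar with the transform mentioned above, one common form of the Mehler-Fock pair built on the conical functions P_{-1/2 + i tau} is the following (normalization conventions differ between references, so this is only a reminder of the general shape):

    \[
    F(\tau) = \int_1^{\infty} P_{-\frac12 + i\tau}(x)\, f(x)\, dx,
    \qquad
    f(x) = \int_0^{\infty} \tau \tanh(\pi\tau)\, P_{-\frac12 + i\tau}(x)\, F(\tau)\, d\tau .
    \]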

Thursday, February 17, 2005 | Nutan Mishra, University of South Alabama | Theory of Optimal Block Designs and Recent Constructions

Abstract: A class of block designs is specified by its parameters. Given a class of block designs D, the objective of optimal design theory is to choose from D a design that gives the "best" estimator of the parameters of interest. The choice of the optimal design depends on how "best" is defined; in statistics, the "best" estimator is most often the one that minimizes the variance. Questions of optimality arise when many parameters are estimated simultaneously. The information matrix of a design, well known as the C-matrix of the design, is used to define various such optimality criteria for the selection of a design. Different optimality criteria were defined by Jack Kiefer and elaborated in his classical article "Construction and optimality of Generalized Youden Square designs" (1975). He showed that Balanced Incomplete Block Designs (henceforth BIBDs) are universally optimal within a given class of designs, the reason being that a BIBD has a completely symmetric C-matrix. In a given class of designs, however, a BIBD may or may not exist. Researchers therefore began searching for designs which, while not BIBDs, have C-matrices satisfying certain optimality criteria; at times such non-BIBD designs do better than the corresponding BIBD. Thus, instead of looking for complete symmetry, other properties of the C-matrix have been considered, for example the support of the experiment or the total number of blocks in the experiment. These efforts gave rise to incomplete block designs which are no longer balanced yet are optimal in their class.
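In the standard notation (my summary for orientation, not the speaker's): if N is the v x b treatment-block incidence matrix, R = diag(r_1, ..., r_v) the replication numbers, and K = diag(k_1, ..., k_b) the block sizes, the C-matrix referred to above is

    \[
    C \;=\; R - N K^{-1} N^{T},
    \]

and for a BIBD this matrix is completely symmetric, i.e., of the form a I + b J.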

Thursday, February 3, 2005 | Xin-Min Zhang, University of South Alabama | Dynamical Systems in Classical Geometry

Abstract: In this talk we shall discuss some dynamical systems problems arising from classical geometry. In particular, we will be concerned with some simple geometric transformations and their iterations on triangles and polygons in two-dimensional Euclidean space. The resulting sequences of triangles and polygons could behave well and be convergent, but some could be chaotic and very complicated. Special emphasis will be placed on the following: 1) Sequences of Kasner polygons; 2) convergent sequences of triangles; 3) convergent sequences of polygons; 4) chaotic sequences of triangles; 5) different encounters with pedal triangles. These problems are interesting and important in their own right while they could provide simple geometric models for "chaotic systems". They are also good examples of how to apply linear algebra and group theory to classical geometry. Everyone is welcome and graduate students are especially encouraged to come.
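One of the simplest transformations of the kind mentioned above is the midpoint (derived) polygon map, which replaces a polygon by the polygon of its edge midpoints. The sketch below (an illustration written for this page, not the speaker's material) iterates it; that the iterates shrink toward the vertex centroid is a standard fact included here only to show the flavor of such iterations.

    import numpy as np

    def midpoint_polygon(P):
        """Replace each vertex by the midpoint of the edge leaving it."""
        return (P + np.roll(P, -1, axis=0)) / 2.0

    # Iterate on an arbitrary (here random) pentagon; vertices approach the centroid.
    rng = np.random.default_rng(3)
    P = rng.uniform(-1, 1, size=(5, 2))
    centroid = P.mean(axis=0)
    for k in range(40):
        P = midpoint_polygon(P)
    print("max distance to centroid after 40 steps:", np.abs(P - centroid).max())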

Thursday, January 27, 2005 | Dan Silver, University of South Alabama | From Nuts to Knots: Scottish Physics and the 19th Century Origins of Knot Theory

Humanities and Social Sciences Colloquium

Thursday, January 20, 2005 | Kenneth Roblee, Troy State University | Introduction to Extremal Graph Theory

Abstract: We begin with the necessary graph theory basics and then proceed to a couple of historical extremal graph-theoretic questions. We then examine a couple of modern graph-theoretic questions which the speaker has investigated, along with their answers. In particular, we discuss the following. An edge-regular graph G of order n is a regular graph of degree d >= 1 for which there exists a nonnegative integer lambda such that every pair of adjacent vertices in the graph has exactly lambda common neighbors. Let ER(n,d,lambda) denote the family of all edge-regular graphs with parameters n, d, and lambda. Given such parameters d and lambda, we answer the extremal question: what is the maximum number n of vertices for graphs in ER(n,d,lambda), and which graphs have this number of vertices?
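A small sketch (an illustration, not code from the talk) of checking the edge-regularity condition directly from an adjacency matrix: a graph lies in ER(n, d, lambda) when it is d-regular and (A^2)_{uv} = lambda for every edge uv.

    import numpy as np

    def edge_regular_parameters(A):
        """Return (n, d, lam) if the graph with 0/1 adjacency matrix A is
        edge-regular, else None.  (A @ A)[u, v] counts common neighbors of u, v."""
        A = np.asarray(A)
        n = A.shape[0]
        degrees = A.sum(axis=1)
        if len(set(degrees.tolist())) != 1:
            return None                      # not regular
        common = A @ A
        lams = {int(common[u, v]) for u in range(n) for v in range(n) if A[u, v]}
        return (n, int(degrees[0]), lams.pop()) if len(lams) == 1 else None

    # Example: the complete graph K4 is edge-regular with d = 3 and lambda = 2.
    K4 = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)
    print(edge_regular_parameters(K4))   # expected (4, 3, 2)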

Wednesday, January 12, 2005 | Dulal K. Bhaumik, Center for Health Statistics, University of Illinois at Chicago | Confidence Regions for Random-Effects Calibration Curves with Heteroscedastic Errors

Abstract: We construct confidence bounds for a random-effects calibration curve model. An example application is the analysis of analytical chemistry data in which the calibration curve contains measurements y for several values of known concentration x in each of q laboratories. Laboratory is treated as a random effect in this design, and the intercept and slope of the calibration curve are allowed to have laboratory-specific values. We (i) develop an appropriate inter-laboratory calibration curve for heteroscedastic data of the type commonly observed in analytical chemistry, (ii) compute a point estimate for an unknown true concentration X when corresponding measured concentrations Y1, Y2, ..., Yq' are provided from q' laboratories (i.e., a subset of the original q laboratories used to calibrate the model, where 1 <= q' <= q), (iii) compute the asymptotic mean and variance of the estimate, and (iv) construct a confidence region for X. The methods are then illustrated using both simulated and typical inter-laboratory calibration data. Other relevant applications of the general approach will be highlighted.
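As a stripped-down illustration of the inverse-prediction step (ii) only (my sketch, with made-up data; it ignores the random-effects and heteroscedastic structure that the talk addresses): fit a straight calibration line y = a + b*x and read an unknown concentration back from new measurements.

    import numpy as np

    # Hypothetical calibration data: known concentrations x and measured responses y.
    x = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
    y = np.array([0.1, 1.9, 4.2, 8.1, 15.8])

    b, a = np.polyfit(x, y, 1)       # slope b and intercept a of y = a + b*x

    def inverse_predict(y_new):
        """Classical calibration estimate of the unknown concentration X."""
        return (np.mean(np.atleast_1d(y_new)) - a) / b

    print(round(inverse_predict([6.0, 6.3]), 3))   # estimate X from two new readings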

Thursday, December 2, 2004 | Lee Klingler, Florida Atlantic University and University of Nebraska | Finitely Generated Ideals in Rings of Integer-Valued Polynomials

Abstract: Let D be an integral domain with field of fractions Q, and let E be a finite non-empty subset of D; we set Int(E,D) = {f(X) in Q[X] : f(E) is a subset of D}, the ring of integer-valued polynomials on D with respect to the subset E. Recall that a Prufer domain is an integral domain in which every non-zero finitely generated ideal is invertible. It is known that D is a Prufer domain if and only if Int(E,D) is a Prufer domain, so that the Int(E,D) construction yields a method for producing new Prufer domains from old. In this talk, we determine the relationship between the minimum number of generators needed for finitely generated ideals of D and the corresponding number for Int(E,D). As a corollary, we show that iterating the Int(E,D) construction cannot produce a sequence of Prufer domains whose finitely generated ideals require an ever larger number of generators.
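A small example of the construction (mine, for orientation): take D = Z and E = {0, 1}. Then

    \[
    f(X) = \tfrac{1}{2}X(X-1) \in \mathrm{Int}(E,\mathbb{Z}),
    \]

since f(0) = f(1) = 0, even though f does not lie in Z[X]; so Int(E,D) genuinely enlarges D[X].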

Thursday, November 18, 2004 | Randall Holmes, Auburn University | Quantum Computation: A Simple Example to Illustrate the Power of Quantum Algorithms

Abstract: A "quantum computer" is a (still only theoretical) computer that registers strings of 0's and 1's by using the states of a quantum object (for instance, the plus and minus spin states of an electron). The behavior of such a computer is governed not by the deterministic laws that govern a traditional digital computer, but by the laws of quantum mechanics. A quantum algorithm exploits this special behavior of a quantum computer to, for instance, solve in polynomial time problems that require exponential time on a traditional computer. In this talk I will present the Deutsch-Jozsa quantum algorithm, as well as a generalization of it (joint work with Frederic Texier), and through this simple example convey a sense of how the dramatic time savings of a quantum algorithm over a traditional algorithm are achieved.
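To give a flavor of the algorithm in its simplest, one-input-bit (Deutsch) case, here is a small state-vector simulation sketch (an illustration written for this page, not the speaker's material): with one Hadamard round before and after the oracle, measuring the first qubit yields 0 when f is constant and 1 when f is balanced, after only a single oracle call.

    import numpy as np
    from itertools import product

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
    I2 = np.eye(2)

    def oracle(f):
        """Unitary U_f |x>|y> = |x>|y XOR f(x)> on 2 qubits, as a 4x4 permutation."""
        U = np.zeros((4, 4))
        for x, y in product((0, 1), repeat=2):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        return U

    def deutsch(f):
        state = np.kron([1, 0], [0, 1]).astype(float)  # start in |0>|1>
        state = np.kron(H, H) @ state                  # Hadamard both qubits
        state = oracle(f) @ state                      # one oracle call
        state = np.kron(H, I2) @ state                 # Hadamard the first qubit
        p_first_is_1 = state[2]**2 + state[3]**2       # probability of measuring 1
        return "balanced" if p_first_is_1 > 0.5 else "constant"

    print(deutsch(lambda x: 0))      # constant
    print(deutsch(lambda x: x))      # balanced
    print(deutsch(lambda x: 1 - x))  # balanced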

Thursday, November 11, 2004 | Peter Sin, University of Florida | The Doubly Transitive Permutation Representations of Sp(2n,2)

Abstract: The symplectic groups over the field of two elements each have two doubly transitive actions. These actions were discovered in the 19th century and appear in the works of Steiner, Jordan, and Riemann. There are many ways to think of these actions: as an affine action on points of a quadric; as the action on sets of quadratic forms; or as the action on the "theta characteristics" of an algebraic curve of genus n. In this talk, we will describe the rich geometric algebra of these group actions and consider the structure of the mod 2 permutation module. In particular, we describe filtrations of these modules such that the subquotients have characters given by Weyl's character formula from Lie theory. The first half of the talk requires only knowledge of linear algebra and basic group theory. Finally, we will discuss the connections with curves and, if time permits, coding theory.

Friday, November 5, 2004 | Richard J. Charnigo, University of Kentucky | On a Flexible Information Criterion for Order Selection in Finite Mixture Models

Abstract: Finite mixture models provide easily-interpreted representations of the heterogeneity in physical phenomena and biological processes; yet, finite mixture models pose special challenges to statisticians, especially with regard to estimation of the order (i.e., the number of distinct mixture components). Lindsay (1983) has developed an elegant framework for nonparametric estimation of the mixing distribution (and, hence, of the order) in the absence of a structural parameter common to all mixture components. However, we demonstrate that, under fairly general conditions, incorporation of a structural parameter results in nonexistence of the semiparametric estimate (if no restriction is placed on the structural parameter) or in a degenerate semiparametric estimate (if the structural parameter is not permitted to exceed some upper bound). Thus, a different paradigm for order selection is required to accommodate the presence of a structural parameter. We propose a flexible information criterion (FLIC) by which both the order of a finite mixture model and the value of the structural parameter can be consistently estimated. The FLIC is similar in spirit to the AIC and BIC but is adaptive in the sense that the strength of the penalty is determined by the data, a feature absent from the AIC and BIC. We investigate the performance of the FLIC through simulation experiments and applications to real data sets.
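As a rough illustration of information-criterion-based order selection (using the familiar BIC from scikit-learn rather than the FLIC proposed in the talk, whose form is not reproduced here): fit Gaussian mixtures of increasing order and keep the order with the smallest criterion value.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Simulated data from a two-component normal mixture.
    rng = np.random.default_rng(42)
    data = np.concatenate([rng.normal(0.0, 1.0, 300),
                           rng.normal(4.0, 1.0, 200)]).reshape(-1, 1)

    scores = {}
    for k in range(1, 6):
        gm = GaussianMixture(n_components=k, random_state=0).fit(data)
        scores[k] = gm.bic(data)          # smaller BIC = preferred order

    best_order = min(scores, key=scores.get)
    print("selected order:", best_order)  # typically 2 for this simulated sample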

Thursday, October 28, 2004 | Sherwin Kouchekian, University of South Alabama | Invariant Subspace Problem and Bergman Operators

Abstract: In this talk we will start by giving some historical background to the famous old Invariant Subspace Problem (ISP). Thereafter, we will discuss why the study of Bergman operators has become a focus of much recent research in function and operator theory. Specifically, we will explain the link between the structure of invariant subspaces of a Bergman operator and the ISP. Finally, we will briefly touch on the subject of unbounded Bergman operators and the corresponding invariant subspace problem by presenting some recently obtained results in this direction. The speaker's intention is to make this talk, or at least a good portion of it, accessible to our graduate students.

Wednesday, August 4, 2004 | Bettina Eick, Braunschweig University of Technology, Germany | Classifying p-Groups by Coclass

 

