**Research**

**User-base-station association control**

We consider a multi-cell network in which each base-station has a battery and can potentially harvest energy from renewable resources such as wind or solar. We also assume that all base-stations are connected to the power grid, which has time-varying energy prices. Each user in the network has the flexibility of associating with one of many base-stations. We focus on the problem of dynamically controlling the battery level at each base-station and dynamically forming associations between users and base-stations, taking into account energy harvesting and user traffic dynamics. The objective is to reduce the operational cost of the network, which in turn can translate to a lower cost for end customers and a reduced carbon footprint for the cellular infrastructure.
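The flavor of the association decision can be illustrated with a toy greedy rule (our own simplified sketch, not the actual control algorithm studied here): each unit-demand user joins the base-station with the lowest marginal energy cost, preferring stored renewable energy over grid purchases.

```python
# Toy sketch (illustrative only): greedily associate each unit-demand user
# with the base-station (BS) whose marginal energy cost is lowest,
# spending stored/harvested energy before buying from the grid.
def associate(num_users, battery, grid_price, capacity):
    """battery[b]: units of stored renewable energy at BS b,
    grid_price[b]: current grid energy price at BS b,
    capacity[b]: remaining user slots at BS b.
    Returns (per-user assignment list, total grid cost)."""
    assignment, grid_cost = [], 0.0
    for _ in range(num_users):
        candidates = [b for b in range(len(battery)) if capacity[b] > 0]
        # marginal cost of one unit of energy: free if the battery has charge
        b = min(candidates,
                key=lambda b: 0.0 if battery[b] > 0 else grid_price[b])
        if battery[b] > 0:
            battery[b] -= 1             # serve from harvested energy
        else:
            grid_cost += grid_price[b]  # otherwise buy from the grid
        capacity[b] -= 1
        assignment.append(b)
    return assignment, grid_cost

# Two BSs: BS0 has 2 units of harvested energy but expensive grid power.
assignment, cost = associate(4, battery=[2, 0],
                             grid_price=[5.0, 1.0], capacity=[3, 3])
print(assignment, cost)   # [0, 0, 1, 1] 2.0
```

The first two users drain BS0's free harvested energy; later users are steered to BS1's cheaper grid power. The actual research problem adds battery dynamics over time and traffic uncertainty, which this one-shot greedy ignores.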

**Base-station availability control**

Consider a network of base-stations powered by renewable energy. The base-stations can conserve energy by switching off when not in use, but each base-station incurs an energy cost for turning on and requires a certain amount of time to do so. We are interested in developing online algorithms that minimize the overall energy cost of operating the network of base-stations. We consider a generic model of future knowledge in which user demands for a certain number of future slots are precisely known. We seek to develop provably efficient online algorithms and to prove that they are tight, i.e., that no algorithm with a better performance bound can be designed.
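The core tension can be sketched with a classic ski-rental-style rule for a single idle gap (a hedged toy model with parameter names of our choosing, not the paper's algorithm): idling costs 1 unit of energy per slot, switching back on costs `beta`, and the next demand is only visible `lookahead` slots ahead.

```python
def gap_cost(gap, beta, lookahead):
    """Online energy cost over one idle gap of `gap` slots, with idle
    power 1 per slot and switch-on cost `beta`; the next demand is only
    visible `lookahead` slots in advance.  (Toy model, illustrative only.)"""
    if gap <= lookahead:
        # end of the gap is visible: pick the cheaper of idling vs. off+on
        return min(gap, beta)
    # end not visible: idle for up to beta slots, then switch off (ski-rental)
    return gap if gap <= beta else 2 * beta

def offline_gap_cost(gap, beta):
    return min(gap, beta)    # clairvoyant optimum for the same gap

print(gap_cost(3, beta=5, lookahead=10))   # 3: short visible gap -> keep idling
print(gap_cost(8, beta=5, lookahead=10))   # 5: long visible gap -> switch off
print(gap_cost(8, beta=5, lookahead=2))    # 10: invisible long gap, 2x optimum
```

With enough lookahead the online rule matches the offline optimum on every gap; without it, the ski-rental fallback pays at most twice the optimum, which is the kind of tight competitive bound this research aims to prove for the full network setting.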

**Multi-stage real-time power-generator decisions for multi-cell base stations**

We focus on multi-stage real-time power-generator commitment decisions for multi-cell base stations in a smart grid with renewable resources. In the U.S., the Independent System Operator (ISO) is responsible for maintaining the reliable operation of the grid. In our model, the ISO manages commitments for multiple electricity generators with varying generation capacities and costs. In addition, we assume that each generator has a ramping capacity, which bounds the difference in its generation between two consecutive time slots. This ramping capacity can be deterministic or random, due to the uncertainty of renewable resources, and strongly influences the commitment decisions. The objective of the ISO is to minimize the total generation cost while guaranteeing that the total power generation matches the demand in each time slot, even though the actual generation cost of each generator is private information. The ISO asks all generators to submit bids of their costs at the beginning and assigns commitments by minimizing the total submitted cost. We assume all generators are strategic, profit-oriented players. The goal is to design a mechanism, operated by the ISO, under which the generators' bid profile forms a Nash equilibrium while the total actual generation cost is simultaneously minimized.
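The dispatch step the ISO performs each slot can be sketched as a merit-order allocation under ramping limits. This is our own simplified per-slot greedy, not the mechanism studied in the research (it ignores both strategic bidding and cross-slot coupling):

```python
def dispatch(prev, caps, bids, ramps, demand):
    """One-slot merit-order dispatch (toy sketch): each generator g must
    stay within [max(0, prev[g]-ramps[g]), min(caps[g], prev[g]+ramps[g])],
    and outputs are raised cheapest-bid-first until demand is met.
    Assumes demand lies between the sums of the lower and upper bounds."""
    lo = [max(0, p - r) for p, r in zip(prev, ramps)]
    hi = [min(c, p + r) for c, p, r in zip(caps, prev, ramps)]
    out = lo[:]                       # start every generator at its lower bound
    need = demand - sum(out)
    for g in sorted(range(len(bids)), key=lambda g: bids[g]):
        take = min(need, hi[g] - out[g])   # cheapest bids absorb demand first
        out[g] += take
        need -= take
    if need > 1e-9:
        raise ValueError("demand infeasible under ramping limits")
    return out

# Generator A: cheap but slow-ramping; generator B: expensive but flexible.
prev = [0, 0]
slot1 = dispatch(prev, caps=[10, 10], bids=[1, 2], ramps=[3, 10], demand=5)
slot2 = dispatch(slot1, caps=[10, 10], bids=[1, 2], ramps=[3, 10], demand=9)
print(slot1, slot2)   # [3, 2] [6, 3]
```

Note that A's ramp limit forces the ISO to commit the expensive generator B in both slots even though A has spare capacity, which is exactly why ramping capacities "strongly influence" commitment decisions, and why slot-by-slot greedy dispatch is not optimal over a multi-stage horizon.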

**Data Freshness**

We have also studied how to manage, in an energy-efficient manner, the freshness of status updates sent from a BS to a user. A natural metric of data freshness at the monitor is the status update age, defined as the time elapsed since the freshest delivered update was generated at the BS. An intuitive policy is the zero-wait policy, in which the source submits a fresh update as soon as the server is free; it achieves the maximum throughput and the minimum average delay, at the cost of energy consumption. Surprisingly, the zero-wait policy does not always minimize the average age, even when energy is not taken into account. This motivates us to study how to optimally control status updates to keep data fresh, and to understand when the zero-wait policy is optimal. We introduce a penalty function to characterize the level of "dissatisfaction" with data staleness, and formulate the average age-penalty minimization problem as a constrained semi-Markov decision process (SMDP) with an uncountable state space. Despite this difficulty, we develop efficient algorithms to find the optimal status update policy, precisely characterizing how long one should wait before submitting a new update in order to be age-optimal under energy constraints. To the best of our knowledge, this is the first control policy proven to minimize the age-of-information in status update systems.
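The counter-intuitive suboptimality of zero-wait can be seen in a small simulation. Below is a hedged sketch with a toy two-point service-time distribution of our own choosing (not from the paper): service times are 0 or 2 with equal probability, and a policy that waits until the current age reaches a threshold `beta` before sampling achieves a lower average age than zero-wait.

```python
import random

def avg_age(wait_policy, n=200_000, seed=1):
    """Monte-Carlo average age for a single source: update i is generated
    wait_policy(s_prev) after delivery i-1 (s_prev = previous service time)
    and takes an i.i.d. service time S in {0, 2}, each w.p. 1/2, to reach
    the monitor.  Toy distribution, chosen only for illustration."""
    rng = random.Random(seed)
    area = 0.0        # integral of the instantaneous age curve
    elapsed = 0.0
    s_prev = 2.0      # service time of the most recently delivered update
    for _ in range(n):
        w = wait_policy(s_prev)          # waiting time before sampling
        s = 2.0 if rng.random() < 0.5 else 0.0
        t = w + s                        # time until the next delivery
        # age climbs linearly from s_prev to s_prev + t over this interval
        area += t * (s_prev + t / 2.0)
        elapsed += t
        s_prev = s
    return area / elapsed

age_zero_wait = avg_age(lambda s: 0.0)
beta = 2 * (2 ** 0.5 - 1)                # age threshold, ~0.83 for this toy
age_threshold = avg_age(lambda s: max(0.0, beta - s))
print(round(age_zero_wait, 2), round(age_threshold, 2))   # roughly 2.0 vs 1.83
```

Intuition: after a zero-length service, the monitor already holds a very fresh update, so immediately submitting another one wastes a transmission that would be more valuable later; waiting until the age reaches the threshold fixes this.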

**Delay and Energy Minimization in Parallel Slow-fading Channels**

We investigate the problem of delay and energy minimization in parallel slow-fading channels. The transmitter has no channel state information, and channel codes can be used to reduce decoding errors and improve throughput. In the literature, the reliability-throughput tradeoff in such systems was characterized by the diversity-multiplexing tradeoff. Later, these results were generalized to the diversity-multiplexing-delay tradeoff, where "delay" denotes the transmission delay, i.e., the number of retransmissions before successful decoding. However, the total delay in communication systems includes both the queueing delay and the transmission delay. To minimize the total delay, one needs to examine channel coding schemes through the lens of queueing theory. In this research, we investigate the optimal coding control scheme for delay and energy minimization. We find that if the packet transmission time has a heavier-than-exponential tail, it is better to use replication coding across all channels; if the packet transmission time has a lighter-than-exponential tail, it is better not to use any coding. In addition, we find that the optimal file service priority depends on the delay metric being minimized: the optimal priority rules for minimizing the average delay, maximum delay, and maximum lateness are smallest-remaining-file-first, earliest-due-time-first, and first-come-first-served, respectively. We have shown that coding control policies designed by following these principles are near delay-optimal. These results are established for general system settings and delay metrics, allowing for arbitrary arrival processes, arbitrary job sizes, arbitrary soft deadlines, and heterogeneous channel conditions.
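The tail-based rule of thumb can be illustrated with a small Monte-Carlo comparison (a toy model of our own, not the paper's analysis): with replication, all `k` channels carry the same packet, so it finishes at the minimum of `k` draws; with no coding, each channel carries its own packet, so the effective per-packet time is `E[X]/k`. The exponential distribution is exactly the boundary, since there `E[min of k] = E[X]/k`.

```python
import random

def heavy(rng):   # hyperexponential mixture: heavier-than-exponential tail
    return rng.expovariate(10.0) if rng.random() < 0.9 else rng.expovariate(0.1)

def light(rng):   # uniform on [0.5, 1.5]: lighter-than-exponential tail
    return rng.uniform(0.5, 1.5)

def per_packet_time(sample, replicate, k=2, n=100_000, seed=7):
    """Per-packet completion time on k parallel channels (toy model).
    Replication: all k channels carry one packet -> time = min of k draws.
    No coding: each channel carries its own packet -> E[X]/k per packet."""
    rng = random.Random(seed)
    draws = [[sample(rng) for _ in range(k)] for _ in range(n)]
    if replicate:
        return sum(min(xs) for xs in draws) / n
    return sum(sum(xs) for xs in draws) / (k * k * n)

# Heavy tail: replication wins.  Light tail: no coding wins.
print(per_packet_time(heavy, True) < per_packet_time(heavy, False))   # True
print(per_packet_time(light, True) > per_packet_time(light, False))   # True
```

With the heavy tail, one straggling channel dominates, so racing redundant copies pays off; with the light tail, the draws are predictable and redundant copies only waste channel capacity.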

**Content Update Scheduling Algorithm**

On today's mobile devices, a myriad of applications update their contents in the background to provide a better user experience. However, such updates, typically performed frequently and without the user's awareness, can waste energy at both cellular base stations (BSs)/wireless access points and mobile devices, especially when the corresponding applications are not accessed by the user in the near future. An immediate yet important question is: what is the best strategy for updating contents in the background to provide the best user experience given the energy constraints of BSs and mobile devices? To tackle this problem, we first formulate a content update scheduling problem that minimizes user inconvenience under an energy constraint. Guided by our observations of usage statistics, we model the problem as a constrained Markov decision process (C-MDP). However, a C-MDP is well known to be hard to solve with standard MDP techniques such as dynamic programming. To provide a solution, we first consider an unconstrained MDP (U-MDP) and devise a low-complexity optimal backward-induction algorithm by exploiting the threshold structure of optimal policies. We then develop an iterative algorithm based on Dekker's method to find a Lagrange multiplier for which the corresponding U-MDP becomes equivalent to the original C-MDP with the given constraint.
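The Lagrangian step can be sketched as a one-dimensional root search on the multiplier. The sketch below is entirely illustrative: the U-MDP is replaced by a closed-form toy (a staleness threshold `tau` with inconvenience `(tau-1)/2` and energy `1/tau`), and plain bisection stands in for Dekker's method, which combines bisection with the secant rule.

```python
def solve_umdp(lmbda):
    """Toy stand-in for solving the unconstrained MDP: the Lagrangian
    (tau - 1)/2 + lmbda/tau is minimized at tau = sqrt(2*lmbda).
    Returns (update threshold, resulting energy use per slot)."""
    tau = max(1.0, (2.0 * lmbda) ** 0.5)
    return tau, 1.0 / tau

def find_multiplier(budget, lo=0.5, hi=1000.0, iters=60):
    """Bisection (a simple stand-in for Dekker's method) on the Lagrange
    multiplier: energy use is decreasing in lmbda, so search for the
    lmbda at which the U-MDP's energy use meets the budget."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        _, energy = solve_umdp(mid)
        if energy > budget:
            lo = mid     # too much energy -> penalize energy more heavily
        else:
            hi = mid
    return (lo + hi) / 2.0

lam = find_multiplier(budget=0.25)
print(round(lam, 2))   # 8.0, matching the closed form 1 / (2 * 0.25**2)
```

The key structural fact being exploited is monotonicity: raising the multiplier makes energy more expensive in the U-MDP, so its energy use decreases, and the constrained optimum sits at the multiplier where the budget is met with equality.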

**Uplink Packet Decoding**

There is increasing interest in coordinating the actions of base-stations for optimizations related to the data path as well as the control path; the CoMP (Coordinated Multipoint) activities for next-generation cellular systems are focused on similar efforts. We focus on the problem of decoding packets in the uplink with coordination across base-stations, but without any requirement of tight time synchronization. To meet the growing demand for wireless data, it is time to move away from the age-old paradigm of prohibiting interfering nodes from transmitting. Instead, by proactively managing the interference among multiple colliding packets, we can design high-throughput wireless systems. This idea is well explored in the information theory community, and a few implementation efforts have also been reported recently. However, existing solutions are non-trivial to use in real systems, as they require either tight time/frequency synchronization or the exchange of data between transmitters prior to transmission. These requirements are hard to meet in practice, especially for uplink transmissions.
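A standard building block for decoding colliding packets is successive interference cancellation (SIC): decode the stronger signal first, subtract its contribution, then decode the weaker one. The toy below (our own noiseless, perfectly synchronized illustration; the research specifically targets removing the synchronization requirement, which this sketch does not model) shows the idea on two colliding BPSK packets.

```python
def sic_decode(y, a_strong, a_weak):
    """Toy successive interference cancellation on a noiseless real channel:
    y[t] = a_strong*s1[t] + a_weak*s2[t], BPSK symbols in {-1, +1}, with
    a_strong > a_weak > 0.  Decode the strong packet, subtract its
    contribution, then decode the weak one.  Illustrative only."""
    s1 = [1 if v >= 0 else -1 for v in y]        # strong signal sets the sign
    residual = [v - a_strong * b for v, b in zip(y, s1)]
    s2 = [1 if v >= 0 else -1 for v in residual]
    return s1, s2

bits1 = [1, -1, 1, 1]
bits2 = [-1, -1, 1, -1]
y = [2.0 * b1 + 0.5 * b2 for b1, b2 in zip(bits1, bits2)]   # colliding packets
d1, d2 = sic_decode(y, a_strong=2.0, a_weak=0.5)
print(d1 == bits1 and d2 == bits2)   # True
```

Both packets are recovered from a single collided reception; the practical challenge addressed by this research is making such interference management work when the colliding transmitters are not tightly time/frequency synchronized.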