Over the past twenty years, camera networks have become increasingly popular. In response to the various demands placed on these networks, several coverage models have been developed in the scientific literature, such as area, trap, barrier, and target coverage. In this paper, a new type of coverage task, the Maximum Target Coverage with k-Barrier Coverage (MTCBC-k) problem, is defined. Here, the goal is to cover as many moving targets as possible from time step to time step while continuously maintaining k-barrier coverage over the region of interest (ROI). This differs from solving the two tasks independently and then merging the results. An Integer Linear Programming (ILP) formulation of the MTCBC-k problem is presented. In addition, two camera clustering methods are developed, which allow smaller ILPs to be solved within the clusters and their solutions to be combined. Furthermore, a polynomial-time greedy algorithm is introduced as an alternative way of solving the MTCBC-k problem. An example is also given of how these methods can be modified to handle a more realistic scenario in which only the targets detected by the cameras are known, rather than all the targets within the ROI. Simulations with both dense and sparse camera placements support the usefulness of the clustering and greedy methods.
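The abstract does not spell out the formulation, but a single time step of MTCBC-k can be illustrated as a small ILP. The following sketch uses the PuLP library; the sets, the coverage inputs covers_t and covers_s, and the segment-based stand-in for the k-barrier constraint are all hypothetical assumptions, not the paper's actual model.

```python
# Illustrative sketch only: the paper's actual MTCBC-k ILP is not given here.
# All sets, coverage data, and the simplified barrier constraint are hypothetical.
# Requires: pip install pulp
import random
import pulp

cameras = range(6)      # camera indices
orients = range(8)      # candidate orientations per PTZ camera
targets = range(10)     # moving targets at the current time step
segments = range(5)     # segments of a barrier spanning the ROI
k = 2                   # required degree of barrier coverage

random.seed(0)
# covers_t[c][o][t]: camera c in orientation o sees target t (hypothetical input)
covers_t = [[[random.random() < 0.2 for t in targets] for o in orients] for c in cameras]
# covers_s[c][o][s]: camera c in orientation o covers barrier segment s
covers_s = [[[random.random() < 0.5 for s in segments] for o in orients] for c in cameras]

prob = pulp.LpProblem("MTCBC_k_sketch", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (cameras, orients), cat="Binary")  # orientation choice
y = pulp.LpVariable.dicts("y", targets, cat="Binary")             # target covered?

prob += pulp.lpSum(y[t] for t in targets)  # maximize the number of covered targets

for c in cameras:  # each camera takes exactly one orientation
    prob += pulp.lpSum(x[c][o] for o in orients) == 1
for t in targets:  # a target counts as covered only if some chosen orientation sees it
    prob += y[t] <= pulp.lpSum(covers_t[c][o][t] * x[c][o] for c in cameras for o in orients)
for s in segments:  # simplified stand-in for k-barrier coverage
    prob += pulp.lpSum(covers_s[c][o][s] * x[c][o] for c in cameras for o in orients) >= k

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], "covered targets:", pulp.value(prob.objective))
```

Note that requiring at least k chosen orientations per barrier segment only approximates true k-barrier coverage, which asks for k camera-disjoint barriers; the paper's actual constraint set is presumably richer, and the clustering methods mentioned above would decompose such an ILP into smaller per-cluster instances.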
Border surveillance and the monitoring of critical infrastructure are essential components of regional and industrial security. In this paper, we study the surveillance strategies of hybrid monitoring systems that combine Pan–Tilt–Zoom (PTZ) cameras, modeled as directional sensors, with UAVs. We aim to accomplish three occasionally conflicting goals. First, at any given moment we want to detect as many intruders as possible, with special attention to newly arriving trespassers. Second, we consider it equally important to observe the temporal movement and behavior of each intruder group as accurately as possible. Third, we seek to minimize the sensor-usage cost associated with surveillance. During the research, we developed and analyzed several interrelated algorithms of increasing complexity. By leveraging Reinforcement Learning (RL) methods, we also gave the system the opportunity to search for a good policy on its own. As a result, we gained valuable insights into how the components of these algorithms are interconnected and how they interact. Building on these observations, we developed an efficient algorithm that takes all three of the above criteria into account.
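The abstract does not disclose how the three objectives are traded off; one common approach in RL, shown here purely as an assumed sketch, is to scalarize them into a single step reward. All terms and weights below are hypothetical.

```python
# Illustrative sketch only: the reward design, terms, and weights are
# hypothetical assumptions, not the paper's published formulation.
from dataclasses import dataclass

@dataclass
class StepStats:
    detected: int          # intruders observed at this time step
    new_detected: int      # newly arriving intruders seen for the first time
    tracking_error: float  # mean estimation error over tracked intruder groups
    sensor_cost: float     # cost of PTZ repositioning and UAV flight this step

def step_reward(s: StepStats,
                w_detect=1.0, w_new=2.0, w_track=0.5, w_cost=0.1) -> float:
    """Scalarize the three goals: detection (with a bonus for new arrivals),
    tracking accuracy, and sensor-usage cost."""
    return (w_detect * s.detected
            + w_new * s.new_detected
            - w_track * s.tracking_error
            - w_cost * s.sensor_cost)

print(step_reward(StepStats(detected=5, new_detected=1, tracking_error=2.3, sensor_cost=4.0)))
```

Tuning such weights is exactly where the conflicts between the three goals surface: for example, a high w_cost discourages the very camera movements needed to catch newly arriving trespassers.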
This paper presents a Reinforcement Learning (RL) framework for Command: Modern Operations (CMO), an advanced Real-Time Strategy (RTS) game that simulates military operations. CMO challenges players to navigate tactical, operational, and strategic decision-making, involving the management of multiple units, effective resource allocation, and concurrent action assignment. The primary objective of this research is to automate and enhance military decision-making using the capabilities of RL. To this end, a parameterized Proximal Policy Optimization (PPO) agent with a custom architecture has been developed, specifically designed to address the challenges presented by CMO. By adapting and extending methodologies from landmark systems in the domain, such as AlphaStar and OpenAI Five, the agent showcases the potential of RL in military simulations. Our model can handle a wide range of scenarios presented in CMO, marking a significant step towards the integration of Artificial Intelligence (AI) into military studies and practice. This research establishes the groundwork for future explorations in applying AI to defense and strategic analysis.
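The agent's architecture is not detailed in the abstract; the sketch below shows one plausible shape for a parameterized PPO policy, assuming PyTorch, a flat observation vector, and a hypothetical factorization into a discrete action type with continuous parameters conditioned on it.

```python
# Illustrative sketch only: the paper's actual network is not described here.
# Dimensions and the two-head factorization are hypothetical assumptions.
import torch
import torch.nn as nn

class ParameterizedPolicy(nn.Module):
    """Discrete action type (e.g. move / strike / assign) plus continuous
    parameters (e.g. heading, range) conditioned on the chosen type."""
    def __init__(self, obs_dim=128, n_action_types=6, param_dim=4):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU())
        self.type_head = nn.Linear(256, n_action_types)                   # categorical logits
        self.param_head = nn.Linear(256 + n_action_types, 2 * param_dim)  # Gaussian mean/log-std
        self.value_head = nn.Linear(256, 1)                               # PPO critic

    def forward(self, obs):
        h = self.trunk(obs)
        type_dist = torch.distributions.Categorical(logits=self.type_head(h))
        a_type = type_dist.sample()
        one_hot = nn.functional.one_hot(a_type, type_dist.logits.shape[-1]).float()
        # Parameter distribution is conditioned on the sampled action type.
        mean, log_std = self.param_head(torch.cat([h, one_hot], dim=-1)).chunk(2, dim=-1)
        param_dist = torch.distributions.Normal(mean, log_std.exp())
        params = param_dist.sample()
        log_prob = type_dist.log_prob(a_type) + param_dist.log_prob(params).sum(-1)
        return a_type, params, log_prob, self.value_head(h).squeeze(-1)

policy = ParameterizedPolicy()
a_type, params, logp, value = policy(torch.randn(2, 128))  # batch of 2 observations
print(a_type.shape, params.shape, logp.shape, value.shape)
```

Conditioning the parameter head on the sampled action type lets the same parameter slots mean different things for, say, a movement order and a strike order, which is the usual motivation for parameterized action spaces in games like CMO.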