We propose a generative model for single-channel EEG that incorporates the constraints experts actively enforce during visual scoring. The framework takes the form of a dynamic Bayesian network with depth in both the latent variables and the observation likelihoods: the hidden variables control durations, state transitions, and robustness, while deep observation architectures parameterize Normal-Gamma distributions. The resulting model segments time series into local, recurring dynamical regimes by combining probabilistic modeling with deep learning. Unlike typical detectors, our model takes the raw data (up to resampling) without pre-processing (e.g., filtering, windowing, thresholding) or post-processing (e.g., event merging). This not only makes the model appealing for real-time applications, but also yields interpretable hyperparameters analogous to known clinical criteria. We derive algorithms for exact, tractable inference as a special case of Generalized Expectation Maximization via dynamic programming and backpropagation. We validate the model on three public datasets and provide evidence that richer models can surpass state-of-the-art detectors while remaining transparent, auditable, and generalizable.
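One standard reason Normal-Gamma observation likelihoods confer robustness, consistent with the abstract's claim, is that marginalizing the Gamma-distributed precision out of the Normal yields a heavy-tailed Student-t density. The sketch below illustrates this identity numerically; the function name and parameter values are our own illustration, not taken from the paper.

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

def normal_gamma_marginal_logpdf(x, mu, alpha, beta):
    # Closed form of log ∫ N(x | mu, 1/tau) Gamma(tau | alpha, beta) dtau,
    # obtained by recognizing the Gamma integral over the precision tau.
    return (gammaln(alpha + 0.5) - gammaln(alpha)
            + alpha * np.log(beta) - 0.5 * np.log(2 * np.pi)
            - (alpha + 0.5) * np.log(beta + 0.5 * (x - mu) ** 2))

# Illustrative values (hypothetical, not from the paper)
x, mu, alpha, beta = 2.3, 0.0, 3.0, 1.5
lp = normal_gamma_marginal_logpdf(x, mu, alpha, beta)

# The marginal is a Student-t with df = 2*alpha, loc = mu,
# scale = sqrt(beta/alpha); its heavy tails downweight artifacts.
lp_t = stats.t.logpdf(x, df=2 * alpha, loc=mu, scale=np.sqrt(beta / alpha))
assert np.isclose(lp, lp_t)
```

The heavy tails of the resulting Student-t mean that outlying samples (e.g., movement artifacts) exert bounded influence on the likelihood, which is one plausible mechanism behind the robustness the model's hidden variables control.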