Humans are able to spontaneously and rapidly extract information about the temporal structure of event sequences (Maheu et al. 2019). Statistical learning is one mechanism by which the brain segments structured sequences, such as continuous speech, into meaningful units – even when the only cues to word boundaries lie in the transitional probabilities between individual syllables (Saffran et al. 1996, Aslin et al. 1998). We set out to investigate the perceptual consequences of this automatic and implicit segmentation process. We also measured statistical learning using both online (target detection) and offline (explicit word recognition) tasks, to gain better insight into the sensitivity of these metrics for evaluating implicit learning. Our results suggest that participants successfully segment the continuous stream of syllables at the underlying word boundaries, as evidenced by both online and offline task performance. Intriguingly, online and offline learning scores were only weakly correlated, suggesting that the strategies used to perform these tasks are distinct. We additionally asked participants to compare the speed of structured and random streams before and after auditory statistical learning exposure. On that task, we did not observe changes in perceived rate despite evidence of implicit segmentation. Ongoing studies seek to further define the mechanisms underlying sequence learning and their consequences for perception through more sensitive tasks.