Post-ReLU Regularization
Dropout mitigates overfitting by randomly deactivating neurons during training, forcing the network to learn redundant representations rather than relying on any single unit.
Here it operates on the [64×24] tensor (64 CNN channels × 24 timesteps) produced after the ReLU activation. This is element-wise dropout over a 2D feature map: random entries across both channels and time are masked before the tensor is passed to the BiLSTM.
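A minimal PyTorch sketch of this stage, assuming standard element-wise dropout. The batch size, dropout rate (p=0.3), and BiLSTM hidden size are illustrative assumptions, not values from the source.

```python
import torch
import torch.nn as nn

batch, channels, timesteps = 8, 64, 24  # [64x24] per sample; batch size assumed

x = torch.relu(torch.randn(batch, channels, timesteps))  # post-ReLU CNN features
drop = nn.Dropout(p=0.3)  # element-wise: masks random (channel, time) entries

drop.train()        # training mode: zeros ~30% of elements, scales survivors by 1/(1-p)
masked = drop(x)

# Reshape to (batch, time, features) before feeding the BiLSTM
lstm = nn.LSTM(input_size=channels, hidden_size=128,
               bidirectional=True, batch_first=True)
out, _ = lstm(masked.permute(0, 2, 1))  # -> (batch, 24, 256)
```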
Scaling the surviving activations by 1/(1-p) during training (inverted dropout) keeps expected values consistent, so no rescaling is needed at inference time.
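To see why the 1/(1-p) scaling preserves expectations, here is a small sketch of inverted dropout written out by hand; the rate p and tensor shape are assumed for illustration.

```python
import torch

p = 0.3
x = torch.relu(torch.randn(64, 24))

mask = (torch.rand_like(x) > p).float()  # keep each element with probability 1-p
y_train = x * mask / (1.0 - p)           # scale survivors by 1/(1-p)

# E[y_train] = (1-p) * x / (1-p) = x element-wise, so inference can
# use x unchanged; the two means agree in expectation:
print(x.mean().item(), y_train.mean().item())
```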
Training with dropout is akin to training an ensemble of overlapping sub-networks, whose averaged behavior at inference improves generalization.