In recent years, machine learning models built on Transformer components have gained prominence for solving Partial Differential Equations (PDEs). Current approaches, however, lack several desirable properties of a neural PDE solver, such as (i) generalization to PDE parameters not seen during training, (ii) spatial and temporal zero-shot super-resolution, (iii) continuous temporal extrapolation, (iv) applicability to PDEs of different dimensionalities, and (v) efficient inference for long rollouts. Motivated by these limitations, we propose a time-continuous Transformer model that provides all of these desirable properties. Crucially, we fuse two recently proposed classes of methods: Neural Fields, which learn the solution of the PDE, and Neural Operators, which learn mappings between function spaces. We achieve this by conditioning the neural field on the current state of the physical system.
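To illustrate the core idea of conditioning a neural field on the current state, the following is a minimal sketch, not the authors' implementation: a small Transformer encoder (an assumed choice here) compresses the discretized current PDE state into a latent code, and an MLP field maps continuous (x, t) query coordinates plus that code to the predicted solution. All module names, layer sizes, and the mean-pooling conditioning scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionedNeuralField(nn.Module):
    def __init__(self, state_dim=1, latent_dim=64, hidden=128):
        super().__init__()
        # Encode the current discretized state u(., t0) into latent tokens.
        self.embed = nn.Linear(state_dim, latent_dim)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # The field maps an (x, t) query plus the pooled latent code to u(x, t).
        self.field = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, coords):
        # state:  (batch, n_points, state_dim)  current PDE state on a grid
        # coords: (batch, n_queries, 2)         arbitrary (x, t) query points
        z = self.encoder(self.embed(state)).mean(dim=1)    # (batch, latent_dim)
        z = z.unsqueeze(1).expand(-1, coords.size(1), -1)  # broadcast per query
        return self.field(torch.cat([coords, z], dim=-1))  # predicted u(x, t)

model = ConditionedNeuralField()
u0 = torch.randn(8, 100, 1)  # current state sampled at 100 grid points
xt = torch.rand(8, 500, 2)   # 500 arbitrary (x, t) queries per batch item
u_pred = model(u0, xt)       # shape: (8, 500, 1)
```

Because the queries are continuous in space and time, a model of this form can in principle be evaluated on finer grids (zero-shot super-resolution) or at unseen times (temporal extrapolation) without retraining.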