Ultra-reliable and low-latency communication (URLLC) is a prerequisite for the successful implementation of the Internet of Controllable Things. In this article, we investigate the potential of deep reinforcement learning (DRL) for joint subcarrier-power allocation to achieve low latency and high reliability in a general form of device-to-device (D2D) networks, where each subcarrier can be allocated to multiple D2D pairs and each D2D pair is permitted to utilize multiple subcarriers. We first formulate the above problem as a Markov decision process and then propose a double deep Q-network (DQN)-based resource allocation algorithm to learn the optimal policy in the absence of full instantaneous channel state information (CSI). Specifically, each D2D pair acts as a learning agent that adjusts its own subcarrier-power allocation strategy iteratively through interactions with the operating environment in a trial-and-error fashion. Simulation results demonstrate that the proposed algorithm achieves near-optimal performance in real time. It is worth mentioning that the proposed algorithm is especially suitable for cases where the environmental dynamics are not accurately known and the CSI delay cannot be ignored.
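The double DQN named above mitigates the value overestimation of standard Q-learning by letting the online network select the greedy next action while a separate target network evaluates it. A minimal sketch of that target computation is given below; the function name, the array-valued Q inputs, and the discount factor are illustrative assumptions, not details taken from the paper, which does not specify its network architecture or hyperparameters.

```python
import numpy as np

def double_dqn_target(reward, q_online_next, q_target_next,
                      gamma=0.99, done=False):
    """Double-DQN target for one transition.

    q_online_next / q_target_next: per-action Q-value arrays at the
    next state, produced by the online and target networks (here just
    hypothetical NumPy arrays standing in for network outputs).
    """
    if done:
        # Terminal transition: no bootstrapped future value.
        return reward
    # Action SELECTION uses the online network ...
    a_star = int(np.argmax(q_online_next))
    # ... action EVALUATION uses the target network.
    return reward + gamma * q_target_next[a_star]
```

In the multi-agent setting described in the abstract, each D2D pair would maintain its own networks and apply an update of this form to transitions gathered from its local observations.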