Distributed ledgers offer many advantages over centralized solutions in IoT projects, including improved security, transparency, and fault tolerance. To leverage distributed ledgers at scale, their well-known limitation, namely performance, must be adequately analyzed and addressed. DAG-based distributed ledgers have been proposed to tackle the performance and scalability issues by design. The first among them, IOTA, has shown promising signs of addressing these issues. IOTA is an open-source distributed ledger designed for IoT. It stores transactions in a directed acyclic graph, achieving potentially higher scalability than blockchain-based distributed ledgers. However, due to the uncertainty and centralization of the deployed consensus, the current IOTA implementation exhibits performance issues and falls short of the original design. In this paper, we first extend an existing simulator to support realistic IOTA simulations and investigate the impact of different design parameters on IOTA’s performance. We then propose a layered model that helps IOTA users determine the optimal waiting time before resending a previously submitted but not yet confirmed transaction. Our findings reveal the impact of the transaction arrival rate, tip selection algorithms (TSAs), weighted-TSA randomness, and network delay on throughput. Using the proposed layered model, we shed light on the distribution of transaction confirmation times, which we leverage to calculate the optimal time for resending an unconfirmed transaction to the distributed ledger. These performance analysis results can support the decision making of both system designers and users.
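As a rough illustration of the resend-time idea only (not the paper’s layered model), the sketch below assumes an empirical confirmation-time distribution is available, e.g., from simulation, and picks the waiting time as a high quantile of that distribution; the quantile level, the synthetic gamma-distributed data, and all names are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: confirmation times (in seconds) of eventually
# confirmed transactions, here drawn from a synthetic gamma distribution
# as a stand-in for simulation output.
rng = np.random.default_rng(seed=42)
confirmation_times = rng.gamma(shape=2.0, scale=15.0, size=10_000)

def resend_wait(times: np.ndarray, quantile: float = 0.99) -> float:
    """Waiting time after which a still-unconfirmed transaction is
    unlikely (at the given quantile) to confirm on its own, so
    resending it becomes the better option."""
    return float(np.quantile(times, quantile))

print(f"resend after ~{resend_wait(confirmation_times):.1f} s")
```

Under this simplification, a higher quantile trades longer waiting against fewer unnecessary resends; the paper’s layered model is what supplies the actual confirmation-time distribution.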