author | Peter Zhu <peter@peterzhu.ca> | 2020-08-24 10:53:11 -0400
---|---|---
committer | Aaron Patterson <aaron.patterson@gmail.com> | 2020-08-25 10:14:10 -0700
commit | 326d89b7cee05b33e6f73fb293a4ae9d5af6f7f2 (patch) |
tree | 5b202c43f68f3ebd407bb7567729a16cc98660ce /gc.c |
parent | 8c030b5c007fe300d78f93a5c3e29f7c44d042cb (diff) |
Correctly account for heap_pages_final_slots so it does not underflow
`rb_objspace_call_finalizer` creates zombies but does not do the corresponding accounting: it should increment `heap_pages_final_slots` whenever it creates a zombie. With correct accounting, `heap_pages_final_slots` can never underflow (the underflow check was introduced in 39725a4db6b121c7779b2b34f7da9d9339415a1c).
This patch moves the accounting from the functions that call `make_zombie` into `make_zombie` itself, which also reduces code duplication.
Notes:
Merged: https://github.com/ruby/ruby/pull/3450
Diffstat (limited to 'gc.c')
-rw-r--r-- | gc.c | 10
1 file changed, 7 insertions(+), 3 deletions(-)
```diff
@@ -2597,6 +2597,10 @@ make_zombie(rb_objspace_t *objspace, VALUE obj, void (*dfree)(void *), void *dat
     zombie->data = data;
     zombie->next = heap_pages_deferred_final;
     heap_pages_deferred_final = (VALUE)zombie;
+
+    struct heap_page *page = GET_HEAP_PAGE(obj);
+    page->final_slots++;
+    heap_pages_final_slots++;
 }
 
 static inline void
@@ -3484,7 +3488,9 @@ finalize_list(rb_objspace_t *objspace, VALUE zombie)
     }
 
     RZOMBIE(zombie)->basic.flags = 0;
-    if (LIKELY(heap_pages_final_slots)) heap_pages_final_slots--;
+    GC_ASSERT(heap_pages_final_slots > 0);
+    GC_ASSERT(page->final_slots > 0);
+    heap_pages_final_slots--;
     page->final_slots--;
     page->free_slots++;
     heap_page_add_freeobj(objspace, GET_HEAP_PAGE(zombie), zombie);
@@ -4324,8 +4330,6 @@ gc_page_sweep(rb_objspace_t *objspace, rb_heap_t *heap, struct heap_page *sweep_
     sweep_page->free_slots = freed_slots + empty_slots;
     objspace->profile.total_freed_objects += freed_slots;
-    heap_pages_final_slots += final_slots;
-    sweep_page->final_slots += final_slots;
 
     if (heap_pages_deferred_final && !finalizing) {
         rb_thread_t *th = GET_THREAD();
```