Tomoyuki Chikanaga wrote:
> I've written a patch. This patch adds the methods Fiber.default_vm_stacksize, Fiber.default_vm_stacksize=, and Fiber#vm_stacksize, adds an optional Hash argument to Fiber#initialize, and includes tests for them.
> You can specify the default VM stack size for Fibers created afterward, and/or specify it individually when creating a Fiber.
>
> ex)
>   Fiber.default_vm_stacksize = 16 * 1024
>   
>   Fiber.new do
>     do_something
>   end
>
> or
>
>   Fiber.new(:vm_stacksize => 16 * 1024) do
>     do_something
>   end

Very nice!

My application uses fibers extensively, and it began to exceed the
default fiber stack during recursive traversal of relatively shallow
trees.  (Not surprising, given the 4K default stack size.)
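
The failure mode looks roughly like the following sketch (the tree shape and depth here are hypothetical; the depth at which a fiber's VM stack actually overflows depends on the build's stack size and the per-frame cost of the traversal):

```ruby
# Recursive traversal inside a fiber: each tree level consumes one VM
# frame on the fiber's own (small) stack, not the main thread's stack.
def traverse(node, depth = 0)
  return depth if node.nil?
  traverse(node[:child], depth + 1)
end

# Build a degenerate "tree" 200 levels deep as a stand-in for real data.
root = nil
200.times { root = { child: root } }

result = Fiber.new do
  begin
    traverse(root)          # returns the depth reached, or
  rescue SystemStackError
    :overflow               # :overflow if the fiber stack is exhausted
  end
end.resume
```

With a 4K-slot VM stack, even modest depths can trip `SystemStackError` where the same traversal on the main thread would be fine.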

I had patched cont.c locally as follows:

#define FIBER_STACK_SIZE_SCALE  8  /* need more fiber stack space */

#define FIBER_MACHINE_STACK_ALLOCATION_SIZE  (0x10000 * FIBER_STACK_SIZE_SCALE)

#define FIBER_VM_STACK_SIZE ((4 * 1024) * FIBER_STACK_SIZE_SCALE)
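
For reference, here is the arithmetic those macros work out to. The VALUE size of 8 bytes is an assumption (64-bit build); on a 32-bit build it would be 4:

```ruby
SIZEOF_VALUE = 8                        # assumed: 64-bit build

scale          = 8                      # FIBER_STACK_SIZE_SCALE
machine_stack  = 0x10000 * scale        # FIBER_MACHINE_STACK_ALLOCATION_SIZE
vm_stack_slots = (4 * 1024) * scale     # FIBER_VM_STACK_SIZE, counted in VALUEs
vm_stack_bytes = vm_stack_slots * SIZEOF_VALUE

machine_stack   # => 524288 (512 KB machine stack)
vm_stack_slots  # => 32768 VALUE slots
vm_stack_bytes  # => 262144 (256 KB of VM stack)
```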


> Note that the size of the VM stack is a number of objects (VALUE slots), so the memsize of the stack is vm_stacksize * sizeof(VALUE) bytes.
> Also note that this patch only makes the VM stack size configurable, not the machine stack size. I think that when Fiber is implemented on top of makecontext/swapcontext (FIBER_USE_NATIVE=1), the machine stack size (default: 64KB) could also be made configurable. I wonder if procedures like `parse_some_big_xml' in Mike's example also need a larger machine stack. Does anyone have such a testcase?
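
In those units, the 16 * 1024 setting from the example above works out as follows (again assuming sizeof(VALUE) == 8 on a 64-bit build):

```ruby
sizeof_value = 8            # assumed: 64-bit build
vm_stacksize = 16 * 1024    # VALUE slots, as passed to :vm_stacksize above
memsize = vm_stacksize * sizeof_value

memsize  # => 131072 bytes, i.e. 128 KB per fiber's VM stack
```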

Can anyone comment on the relationship between the VM stack size and the
machine stack size?

I scaled them both equally to be "safe".

But I don't know how they are related, so I'm not sure whether the
corresponding increase to the machine stack was actually necessary.


Thanks,

Bill