Battery cycling vs. Diesel consumption
I have a simple system comprising a diesel generator (DG), a load, and a battery. The load is fixed at a constant 10 kW, the DG rating is 20 kW, and the battery is about 100 kWh. I am using the cycle charging strategy for dispatch. I have noticed a difference in how HOMER Pro dispatches the battery versus HOMER 2.0, which I was using earlier. In HOMER 2.0, the DG runs at 20 kW and the excess 10 kW charges the battery. In HOMER Pro, however, the DG runs at 16 kW and only 6 kW is used to charge the battery. I could not work out why the 6 kW limit is being imposed, since all three formulas for the maximum charging limit give values higher than 6 kW. Could you please help me understand the difference? As I understand it, HOMER Pro considers cycling degradation in the battery. How is this accounted for, and is it used in making the dispatch decision? Although both systems give almost the same LCOE, the diesel consumption differs between the two cases.
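For reference, here is how I am checking the three charging limits. This is only a sketch based on the formulas in the HOMER help (kinetic-model limit, maximum charge rate limit, and maximum charge current limit); the parameter values in the docstring are placeholders, not my actual inputs:

```python
import math

def max_charge_power_kw(q1, q, q_max, k, c, alpha_c, n_batt, i_max, v_nom, dt=1.0):
    """Maximum battery charge power (kW) as the minimum of the three limits
    described in the HOMER help. Illustrative only; parameter names follow
    the help text, not any actual .homer file.

    q1      available charge at start of time step (kWh)
    q       total charge at start of time step (kWh)
    q_max   total battery capacity (kWh)
    k       storage rate constant (1/h)
    c       storage capacity ratio (dimensionless)
    alpha_c maximum charge rate (A/Ah)
    n_batt  number of batteries
    i_max   maximum charge current per battery (A)
    v_nom   nominal battery voltage (V)
    dt      time step length (h)
    """
    e = math.exp(-k * dt)
    # Limit 1: kinetic (two-tank) battery model charge-transfer limit
    p_kbm = (k * c * q_max - k * q1 * e - q * k * c * (1 - e)) / (
        1 - e + c * (k * dt - 1 + e)
    )
    # Limit 2: maximum charge rate limit
    p_mcr = (1 - math.exp(-alpha_c * dt)) * (q_max - q) / dt
    # Limit 3: maximum charge current limit
    p_mcc = n_batt * i_max * v_nom / 1000.0
    return min(p_kbm, p_mcr, p_mcc)
```

With my numbers, all three limits come out well above 6 kW, which is why I am puzzled.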
Hey Amol, HOMER 2 was a much more simplistic platform for microgrid simulations than HOMER Pro, so it isn't surprising that there are differences between the two. However, there are a lot of variables in play in your question. If you could upload your HOMER file to the forum, I'd be happy to take a look at it and let you know about the results you're seeing.
I have included both HOMER files. The one with the _old suffix has the battery model imported from the earlier HOMER version. Let me know what you think.
The Diesel_battery_old.homer file was using the kinetic battery model, while in Diesel_battery.homer the battery was built using the Modified Kinetic battery model.
Please go through the Help section within the software to understand the difference between the two models.
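On your degradation question: HOMER Pro assigns the battery a wear cost per kWh of throughput, and dispatch weighs that cost against the cost of running the generator, which can lead it to charge the battery less aggressively than HOMER 2 did. A rough sketch of the wear-cost formula from the help (the numbers below are purely illustrative, not taken from your files):

```python
import math

def battery_wear_cost(c_rep, n_batt, q_lifetime, eta_rt):
    """Battery wear cost in $/kWh of throughput, per the HOMER help:
    c_bw = C_rep / (N_batt * Q_lifetime * sqrt(eta_rt)).

    c_rep       replacement cost of the battery bank ($)
    n_batt      number of batteries in the bank
    q_lifetime  lifetime throughput of a single battery (kWh)
    eta_rt      round-trip efficiency (fraction)
    """
    return c_rep / (n_batt * q_lifetime * math.sqrt(eta_rt))
```

So even when the charging limits allow more power, the dispatch can decide that cycling additional energy through the battery isn't worth the wear cost relative to the fuel saved.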
Hope this helps!